Joey on SQL Server
Microsoft Bolsters Fabric at Ignite
While Fabric has improved in the year since its release, there are still some glaring issues with the platform.
By Joey D'Antoni
11/19/2024
Today at Microsoft Ignite, Microsoft introduced SQL database in Fabric. This relational database service runs in the context of a Fabric capacity. SQL database in Fabric is built on the Azure SQL Database platform, running in pools using Fleet Management, allowing Fabric users to deploy a database in their Fabric environment in seconds.
While I suspect the initial use cases for the new database service will relate to business intelligence workloads in Fabric, Microsoft's broader vision is for Fabric to serve as the hub for all of an organization's data.
Part of this service includes the mirroring feature that Microsoft introduced to Fabric last year. SQL database in Fabric uses mirroring to replicate data from database tables, using a change data capture approach, to Delta Lake files in your OneLake. This concept goes by several names, including hybrid transactional/analytical processing (HTAP) and "translytics." You can then transform the data in OneLake into an analytical model using a lakehouse, notebook, or pipeline, and then ingest it into a data warehouse for storage. For some customers with relatively small data volumes, an SQL database in Fabric could replace the data warehouse entirely.
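To make that flow concrete, here is roughly what reading a mirrored table's Delta files looks like from a Fabric Spark notebook. This is a minimal sketch: the workspace, item, and table names are hypothetical placeholders, and the exact OneLake path for your mirrored item may differ.

```python
# Minimal sketch: reading a mirrored table's Delta files from OneLake in a
# Fabric Spark notebook. Workspace, item, and table names are hypothetical
# placeholders; the `spark` session is predefined in Fabric notebooks.
path = (
    "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
    "SalesDB.MirroredDatabase/Tables/dbo/Orders"
)
orders = spark.read.format("delta").load(path)

# Shape the mirrored data into an analytical structure, e.g., a daily rollup
daily = orders.groupBy("order_date").sum("order_total")
daily.write.mode("overwrite").saveAsTable("daily_orders")  # attached lakehouse
```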
Let's talk a little more about the technical details. At launch, SQL database in Fabric is a general-purpose serverless database, meaning that it auto-scales cores and memory based on database activity. That activity counts against your Fabric capacity and is subject to Fabric's bursting and smoothing algorithms. There are a few caveats at launch -- you can only use Microsoft-managed certificates for transparent data encryption, which Microsoft will address. Because the service runs on the general-purpose tier, data volume tops out at four terabytes (I expect Microsoft to move SQL database in Fabric to Hyperscale, which would remove this limitation). Finally, you don't get to choose the name of your server, though you do get to select the name of your database. There will also be some diagnostic information, similar to Microsoft's database watcher offering, in addition to the data available in Query Store and the DMVs.
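On that last point, because the service is built on the Azure SQL Database platform, you can query Query Store over the familiar TDS endpoint. A minimal sketch follows; the server and database names are hypothetical (Fabric generates the server name, so in practice you would copy the connection string from the portal).

```python
# Minimal sketch: querying Query Store on a Fabric SQL database over the
# TDS endpoint, as with Azure SQL Database. Server and database names are
# hypothetical placeholders -- copy the real connection string from the portal.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<generated-server>.database.fabric.microsoft.com,1433;"
    "Database=SalesApp;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Top five queries by total CPU, straight from Query Store
rows = conn.execute("""
    SELECT TOP 5 q.query_id,
           SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    GROUP BY q.query_id
    ORDER BY total_cpu DESC
""").fetchall()
for row in rows:
    print(row.query_id, row.total_cpu)
```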
As I mentioned, I'm happy with Microsoft's technical vision here. The notion of building data-centric apps on Fabric and then reducing the effort needed to get that data into an analytical structure like a star schema is a good one. Fabric's Power BI integration makes the service very popular with business users. Adding a database service to Fabric when there is already a data warehouse and an SQL endpoint is slightly confusing, but directionally it is a good idea.
The Problems with Fabric
While I think Fabric is the right direction for Microsoft, I must be completely honest -- the product stack lacks maturity across the board. Fabric has been generally available (GA) for a year now, yet it still lacks many features of competing platforms like Databricks. For the most part, these aren't core data features, but their absence makes adopting the Fabric platform more challenging. Some of this comes from the mindset of trying to make a very complex data platform into a software-as-a-service (SaaS) product, where administrators and developers have limited choices in how they configure and manage the service. Other issues are simply architectural decisions. For example, in-depth monitoring is available within Fabric, but it runs inside your Fabric capacity.
This monitoring architecture leads to a few problems. First, the monitoring service itself consumes resources that business users would otherwise use. Second, during periods of high utilization, monitoring information lags considerably, which makes it harder for administrators to terminate poorly coded jobs.
This monitoring concern ties into another concern about Fabric: the bursting and smoothing model. Organizations pay one price for a Fabric capacity based on a number of capacity units (CUs), ranging from two on the low end to 2,048 at the top. That number isn't a specific number of CPU cores, though there is a mapping for Spark workloads. Because Fabric auto-scales, you can briefly consume more resources than you paid for, in a process called bursting. The payback process is called smoothing, where your available compute capacity is reduced until you have "paid back the loan." Foreground operations (interactive jobs run by users) are smoothed over five minutes, while background operations are smoothed over the following 24 hours. This model aims to eliminate scheduling concerns, but in reality it has, in some cases, blocked users from using Fabric for 24 hours.
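To make the arithmetic concrete, here is a toy sketch of how a background burst gets spread out. This is my own simplification for illustration, not Microsoft's actual billing logic, and the capacity size and job cost are hypothetical.

```python
# Toy model of Fabric's bursting/smoothing arithmetic -- a simplification
# for illustration, not Microsoft's billing logic. The capacity size and
# job cost below are hypothetical.

def smoothed_cu_draw(cu_seconds: float, background: bool = True) -> float:
    """Extra CU draw per second after a burst is smoothed out."""
    window_seconds = 24 * 3600 if background else 5 * 60  # 24 h vs. 5 min
    return cu_seconds / window_seconds

capacity_cus = 64              # e.g., an F64 capacity
job_cost_cu_seconds = 500_000  # a bursty background job's total consumption

extra = smoothed_cu_draw(job_cost_cu_seconds, background=True)
print(f"That job adds {extra:.2f} CUs of steady load for the next 24 hours, "
      f"or {extra / capacity_cus:.1%} of the F64's {capacity_cus} CUs.")
```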
The concept itself isn't a problem, but combine it with limited monitoring options and general confusion about which types of operations consume the most CUs, and an organization can find itself unable to refresh its dashboards before a critical meeting. I wrote a post on my blog about how organizations should manage their capacities to deal with this. It can be hard to evaluate the impact of certain types of operations, some of which can be massively impactful to a capacity.
My other concerns with Fabric are the development and deployment processes and security controls. I'm tying these two concepts together because Fabric does not use service principals; instead, it uses "workspace identities," which are app registrations in Entra. Additionally, several operations can't authenticate to various Azure services using those workspace identities.
I recently tried to copy some files from an Azure Data Lake into a Fabric workspace using a Spark notebook. I had to use a shared access signature to authenticate to the data lake storage account because that operation didn't support the more secure integrated authentication. I'm also in the process of building a best-practices document for CI/CD on Fabric. Source control in Fabric is a complex mix of API calls to Fabric from your deployment tool and different approaches for different Fabric resources, thanks to a blend of unsupported features and limited automation options within the service. Security and source control feel bolted onto Fabric, and they are the most challenging implementation concerns, even for smaller organizations, much less enterprises. With some of these processes in their current state, mature, experienced data engineering teams and their organizations will not be happy migrating to Fabric.
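For context, the SAS workaround looks roughly like this in a Spark notebook. This is a minimal sketch: the storage account, container, and token values are hypothetical placeholders, and the configuration keys shown are the standard hadoop-azure (ABFS) settings for a fixed SAS token.

```python
# Minimal sketch of SAS-based access to ADLS Gen2 from a Fabric Spark
# notebook. Account, container, and token are hypothetical placeholders;
# in practice, pull the token from a secret store rather than hardcoding it.
account = "mydatalake"     # hypothetical ADLS Gen2 account
container = "raw"          # hypothetical container
sas_token = "sv=2024-..."  # truncated placeholder

spark.conf.set(
    f"fs.azure.account.auth.type.{account}.dfs.core.windows.net", "SAS")
spark.conf.set(
    f"fs.azure.sas.token.provider.type.{account}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set(
    f"fs.azure.sas.fixed.token.{account}.dfs.core.windows.net", sas_token)

# Copy the files into the workspace's attached lakehouse
df = spark.read.parquet(f"abfss://{container}@{account}.dfs.core.windows.net/files/")
df.write.mode("overwrite").parquet("Files/imported/")  # lakehouse Files area
```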
What can Microsoft do to make Fabric better? I would like to see the following:
- Support for external monitoring tools like Azure Monitor, so admins can quickly identify anomalous performance conditions.
- A more robust security model that leverages the gains Entra has made in recent years.
- More transparent in-app explanations of bursting and smoothing, including letting users know the performance cost of potentially expensive operations.
- A more integrated and robust CI/CD experience.
- Some level of resource governance to prevent developers from crippling the service via bursting.
From a business perspective, Microsoft's vision for Fabric is good. However, I see it lacking in execution in many places -- there has been a lot of great work to do some cool things, but the product simply does not feel finished as a whole. Based on my discussions with colleagues at work and across the MVP ecosystem, security, usability, supportability, and performance management need to take precedence over the new and shiny. I'd also like to see Fabric be more of a PaaS experience than a SaaS experience. With a surface area as large as Fabric's, users and administrators need more control of, and visibility into, operations to best manage the platform.
About the Author
Joseph D'Antoni is an Architect and SQL Server MVP with over two decades of experience working in both Fortune 500 and smaller firms. He holds a BS in Computer Information Systems from Louisiana Tech University and an MBA from North Carolina State University. He is a Microsoft Data Platform MVP and VMware vExpert. He is a frequent speaker at PASS Summit, Ignite, Code Camps, and SQL Saturday events around the world.