Joey on SQL Server
Is AI Really the Next Tech Gold Rush?
Tech giants are betting big on AI. But is the $325 billion investment chasing real growth or just hype?
- By Joey D'Antoni
- 02/25/2025
Depending on who you talk to, AI is either an economic boon that will grant the workforce never-before-seen levels of productivity or an evil, job-killing technology unworthy of use. The truth, like most things, lies somewhere in the middle. With Microsoft, Meta, Amazon and Google's parent company Alphabet expecting to invest a cumulative $325 billion in building artificial intelligence infrastructure and solutions, it's worth exploring the realities of large language model (LLM) solutions with accuracy and nuance.
Note: AI is an all-encompassing term for several different services, and most of those services are far more functional and mature than the current state of GenAI offerings. For brevity, in this article, I will focus exclusively on generative AI built on LLMs.
A $325 billion capital investment in AI should come with the expectation of incredible rates of return for those vendors. Or are they just throwing money at the latest trend? (See: the metaverse.) Even as it makes these giant investments, Microsoft has reported only a $13 billion annual run rate from its AI products. That isn't the kind of return on investment you would expect from such a large capital outlay.
As a consultant, I have observed IT organizations investing in AI across hardware, cloud and software services. Interestingly, as I wrote this column, Microsoft began pulling back some of those investments -- canceling contracts for a couple of to-be-built datacenters -- leading to speculation among pundits that the spending bubble is about to burst. Last week, Satya Nadella, appearing on a podcast, questioned the land rush toward AI: "If you're going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth," Nadella said. "Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me," he stated, adding, "The real benchmark is: the world growing at 10%."
In the last twelve months, I've interacted with generative AI in several ways: I've delivered training, built AI solutions, worked with GitHub Copilot and used many other AI tools. My experiences have been extremely mixed, with results at both extremes. While prepping this column, I spoke with my friend Caley Fretz, editor-in-chief of the cycling website EscapeCollective.com, who said, "It (GenAI) is good at turning a lot of words into fewer words (like writing a headline), but it is bad at making its own words." I think that quote is accurate -- these tools shine when given a known problem set to reduce into something more easily understood. Consider use cases like creating a title for an article or generating a meeting summary. But replacing software engineers, lawyers or accountants? That requires layers of additional intelligence that current AI models lack and will likely never achieve.
There are some patterns where GenAI can be helpful, but it can also make up things that are completely wrong while passing them off as accurate content. With all of this in mind, I wanted to present my honest assessment of the current state of the art of LLMs, the market's real potential and what I think the future looks like.
The first thing is that using an LLM as a programming aid (specifically GitHub Copilot) improves my pace of development by about 10 to 20 percent. Copilot shines when it comes to generating code templates to get started, converting files from one language to another (for example, converting bash scripts to PowerShell) and those sorts of basic tasks. Where it ultimately falls down is when you run into issues beyond simple syntax errors. You can think of the various language models as an advanced Books Online, with more context and (sometimes) more accurate code for your situation. But I have noticed hallucinations (when a model "infers" data that it thinks exists but actually does not) around PowerShell cmdlets (typically, a Get- or Set- cmdlet that doesn't actually exist) or made-up columns in SQL Server system views.
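To make that concrete, here's a minimal T-SQL sketch of what that failure mode looks like against a real dynamic management view (the hallucinated column name is hypothetical, but it's representative of the kind of suggestion I've seen):

-- These columns all exist in sys.dm_exec_requests; this query runs as written.
SELECT r.session_id,
       r.status,
       r.cpu_time,            -- CPU time consumed, in milliseconds
       r.total_elapsed_time,  -- total time since the request arrived
       r.logical_reads
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;      -- skip most system sessions

-- A typical hallucination: the model confidently adds a plausible-sounding
-- column such as r.cpu_percentage (hypothetical -- no such column exists in
-- this DMV), and the query fails with "Invalid column name" when you run it.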
In my current role, I'm building a new application, and one of the things I'm planning for the app is semantic search. In a database context, you send data out to an AI service via an API to get "vectorized." Vector similarity search is a method of measuring how similar two items are by comparing those vectors. Microsoft has a demo that uses this approach for recipe search functionality -- searching for vegan recipes or high-protein recipes lets you perform advanced searches that would be difficult based on the text of the query alone. The search uses distance metrics like cosine similarity or Euclidean distance (time to pull out that old geometry textbook). In my app, my initial search will be against a full-text index (just a regular full-text search), followed by a secondary vector search for similar records.
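As a rough sketch of that hybrid pattern, here's what it can look like in T-SQL, assuming the native VECTOR type and VECTOR_DISTANCE function available in Azure SQL Database (the table, columns and embedding dimension here are hypothetical, not from my actual app):

DECLARE @search_terms NVARCHAR(200) = N'vegan';
DECLARE @query_vector VECTOR(1536);   -- embedding of the search text; in
                                      -- practice the application populates
                                      -- this from an external embedding API

-- Stage 1: a regular full-text predicate narrows the candidate set.
-- Stage 2: cosine distance against stored embeddings ranks those candidates.
-- (Cosine similarity is the dot product of two vectors divided by the
-- product of their magnitudes; VECTOR_DISTANCE returns the distance form.)
SELECT TOP (20)
       r.recipe_id,
       r.title,
       VECTOR_DISTANCE('cosine', r.embedding, @query_vector) AS distance
FROM dbo.Recipes AS r
WHERE CONTAINS(r.description, @search_terms)
ORDER BY distance;                    -- smaller distance = more similar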
This type of vector search is effective on a narrow set of data. However, like all AI solutions, semantic search faces accuracy problems as data volumes grow or as input texts get too similar, which reduces user confidence. There are several approaches to improving the quality of those results, including semantic ranking. I recently recorded a video on semantic ranking for Microsoft. While impressed with the technology, I couldn't help but notice that the model required a 24-core GPU VM to execute against a relatively small data set. You could run it on lower-tier hardware, but you would likely face performance challenges, especially for something like a recommendation engine. That VM costs around $1,800/month on a one-year reservation, which is just one example of how computationally expensive it can be to get improved accuracy out of AI models.
You should note that there will be some use cases where it is trivial to validate output -- if you wrote a blog post, you know what the title should be. But if you have to verify code an LLM generated for you, how will that scale over time? Is the code efficient? That's the nuance where human experts are always needed.
Like many other technologies I have seen come and go during my long IT career, AI and generative AI will be around for a while. The question is always: will it be like the Internet, which had an early boom, a crash and then took over the world, or will it be like "big data," which was too challenging for many organizations but built a framework for better solutions over a decade after its mainstream introduction? The hype, and some of the poor decisions around AI replacing people, have led to a more negative perception among the general public. However, figuring out useful tech that can make money is the challenge. Nadella's recent comments highlight that unless something changes, AI will be closer to Hadoop than to Netscape.
About the Author
Joseph D'Antoni is an architect with over two decades of experience working in both Fortune 500 and smaller firms. He holds a BS in Computer Information Systems from Louisiana Tech University and an MBA from North Carolina State University. He is a Microsoft Data Platform MVP and VMware vExpert, and a frequent speaker at PASS Summit, Ignite, Code Camps and SQL Saturday events around the world.