Q&A

AI from the Trenches, Part 2: The Human Factor

AI expert Cameron Turner continues the conversation, providing his perspective on how generative AI is impacting the enterprise and its role as a transformative technology.

In the second part of our sweeping generative AI interview with Cameron Turner, applied AI expert and vice president of data science at consultancy Kin + Carta, we touch on the more human effects of the technology, from where enterprises are heading to how governments and experts are approaching regulation, and what lasting impact generative AI will have on us all. (Catch up on the first part of our conversation here.)

Redmond: Recently, many major companies have made big splashes with investments in generative AI technologies, with some, like Microsoft, looking to buy as many AI startups as they can. Is this trend an immediate boon, or do you see it continuing as we move forward?
Turner: What we're going to see on the acquisition side is going to reflect the general contours of the market, and one of the key trends developing very quickly is that generative AI, and AI more broadly, is becoming much less of a vertical and much more of a horizontal line. What I mean by that is, if you look at enterprise budgets, you're not necessarily going to see, "…in 2024, we're going to invest this much into generative AI." What you're seeing is, we're going to look to automate our engineering process. We're going to look for ways to automate and refine our operations or our marketing, and in the context of those specific outcomes, those business objectives, you'll see AI play a supporting role.

So how that would play out on the acquisition side is that we'll see more refinement and specificity in what exactly startups are focused on. We're already seeing it for sure in healthcare, and starting to see it in manufacturing, where the large language model itself is trained and tuned on a specific domain in order to solve that problem. So I think that's more of what we'll see, versus mega platforms that are just sort of consuming everything around them. Not to say that we're going to see any kind of slowdown by the big players in terms of acquiring talent and technology. But in terms of the contour of the market, I think it's very much focused on AI enablement versus AI for AI's sake.

Are there any particular startups you're keeping a close eye on for doing something unique that you haven't seen anywhere else?
Yeah, well, another trend to call out is that we've all learned, as we've started to use these technologies, that prompt engineering is a real key part of quality of response. So there's a whole bunch of startups popping up around prompt engineering, and I don't just mean how you phrase the question. It's really about overloading the prompts in order to provide additional information that will then be synthesized into the outcome, and to leverage the memory that's sort of inherent in AI interactions. A lot of companies are recognizing that trend right now.

This also gets to the emergence of new IT roles. We have many people at Kin + Carta who have amended their job title to include prompt engineer. That brings about questions like, would you pay someone to be a specialist in using Google when that's sort of the everyman tool? Everyone can figure out how to type in a search. But it turns out that for generative AI, you're really driving a very complex machine, with the LLM [large language model] underneath providing the response. So there is an art form to it. And there are even startups focused on building Gen AI to drive Gen AI, so we're already seeing that iteration. That's kind of the space I would look at: what are the abstractions, and how do you quickly identify generalized solutions within those abstractions? That's where the value will get generated.
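To make the "prompt overloading" idea Turner describes a bit more concrete, here is a minimal sketch in Python (ours, not his or Kin + Carta's). The function name and example data are hypothetical; the idea is simply that retrieved domain context and recent conversation memory get packed into the prompt before it is sent to whichever LLM API you use.

    # Illustrative sketch only: "overloading" a prompt with extra context and
    # conversation memory so the model can synthesize them into its answer.
    def build_overloaded_prompt(question, context_snippets, conversation_memory):
        """Assemble one prompt string from the user's question, retrieved
        domain context, and a short window of prior conversation turns."""
        context_block = "\n".join(f"- {snippet}" for snippet in context_snippets)
        memory_block = "\n".join(conversation_memory[-5:])  # keep only recent turns
        return (
            "You are a domain assistant. Use the context and conversation "
            "history below when answering.\n\n"
            f"Context:\n{context_block}\n\n"
            f"Conversation so far:\n{memory_block}\n\n"
            f"Question: {question}"
        )

    if __name__ == "__main__":
        prompt = build_overloaded_prompt(
            question="Which claims in this batch need manual review?",
            context_snippets=[
                "Policy X excludes pre-existing conditions.",
                "Claims over $10,000 require a second reviewer.",
            ],
            conversation_memory=["User asked about claim 1042; assistant flagged it."],
        )
        print(prompt)  # this string would be sent to the LLM of your choice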

On that note, do we currently have enough IT pros who consider themselves prompt engineers to address the influx of demand?
I don't know the answer to that. But I will say that the challenge with that role is that it's not strictly engineering: there are ethical considerations, there are definitely legal considerations, and, when you get into things that are customer-facing, brand considerations. The unicorn role there is one that can sort of straddle all those different domains in order to generate an outcome that checks a lot of different boxes. So I think that's the constraint right now.

When we talked earlier in the year, we asked you about the feasibility of setting up regulations around generative AI, and your answer, paraphrasing, was that it's so new, it's kind of a dumb task to try to take on right now. Has your thinking changed on that in the last couple of months, especially now that governments seem to be attempting to step in?
I think things have changed a ton in a short period of time. Overall, among the community, there is a call for greater regulation, so nobody is denying that something needs to exist in this space. Andrew Ng at Stanford recently wrote a letter to his community on this point. His concern was not whether or not there should be regulation, but that the people being tasked with generating the regulation don't have the technical knowledge to be effective. So it read like a call to action to the AI community to get involved in order to support that endeavor and ensure that regulation doesn't come down that is misguided about the technology, both in terms of what the potential upside could be and in making sure that we're actually creating remediation for the risks as they exist, not as they're perceived. Right now the perceived risk is very different from the actual risk.


"So D.C. has a very different view, regardless of political affiliation, on what AI is and isn't compared to Silicon Valley. But Silicon Valley doesn't totally get the whole world either."

Cameron Turner, Vice President of Data Science, Kin + Carta

That really brings home your previous point that for those in IT who are becoming the experts in this field, it's not just about learning the technology. It has to be a holistic approach to generative AI, including the human and legal factors.
I think that's it, and also that no one group has the full perspective. So D.C. has a very different view, regardless of political affiliation, on what AI is and isn't compared to Silicon Valley. But Silicon Valley doesn't totally get the whole world either. We can be very sort of myopic here in terms of what we don't see. So I think there's learning that needs to happen on both sides, and that'll probably turn into more collaboration. I think the only way through is two-way knowledge transfer on all potential and perceived risks. And it's a lot of work that hasn't happened yet.

There has been chatter from a handful of luminaries in the tech industry about the dangers of generative AI, with some prophesying nothing short of apocalyptic outcomes brought about by the tech. What do you think of this sort of discourse, and is there some truth in some of the more fiery condemnations out there?
Well, let me set it up this way. I was at the MIT conference in Cambridge earlier this year where Geoffrey Hinton gave his talk about the existential threat of AI. Hinton is the former Google researcher who was deeply involved in a lot of the early work that led to everything we're talking about today. He made a statement, if I can paraphrase, that humanity is a temporary milestone in the advancement of intelligence. That left the room breathless, so there's that perspective. And Musk is sort of in that camp as well.

On the other side, a piece worth reading is from Marc Andreessen, who wrote a letter to the community titled "Why AI Will Save the World." In it, he makes the point that, effectively, why would you not want an assistant that knows all about whatever you're doing, personally or professionally, working by your side all the time?

I think that, at the end of the day, fear and mistrust have been justified with some technologies, so there should be some serious evaluation in whatever we do. But I think that, over time, much of it will integrate easily with our day-to-day lives, and we will see some human functions of what we currently do drop away naturally. In other cases, it may not, and we may need to reevaluate.

Take a look at what happened during COVID with the QR codes that replaced menus and wait staff in restaurants. In some cases, it's much more efficient and it works great. But what did we lose in that process? That's where the human evaluation has to come back in for any form of automation.

We have to sort of take a pause and think about what's at stake. Is it always about efficiency and profit and velocity, or do we want to take a breath and think about how to evaluate? So that's very much in my head. The biggest risk in the long term is automating ourselves to a point where we've stripped out a lot of the things that are meaningful in our lives, things that AI can replace but shouldn't.

About the Authors

Gladys Rama (@GladysRama3) is the editorial director of Converge360.

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
