
AI Engineering: What Developers Need to Think About in 2024

For another year, at least, AI will continue to grab the headlines. Here are some aspects of AI engineering that might affect developers in 2024.
Jan 2nd, 2024 1:30am
Image by Diana Gonçalves Osterfeld

Over the last year, many large language model (LLM) projects have been emerging quickly from the depths of research. Looking toward the upcoming year is less about following trends and more about spotting which periscopes will be raised first. Not everything will be about or around AI, but for another year at least, the usual suspects will grab the headlines.

I’ve listed the areas that might affect developers either through involvement or service use in their next projects. Most of these are just the next problem areas or where interests naturally collide.

Speeding Up, Sizing Down and AI Self-Generating Data

We have now entered the “let a thousand LLMs bloom” period of AI, and the problems of access to data and the size limitations of many smaller operations will bloom as well.

Wayve’s approach to autonomous driving is to feed visual information back into a model, which can then generate driving videos, which in turn can be used as training or prediction data. This makes sense because even with dashcams, there probably isn’t enough video footage for deep learning. But this takes a lot of compute.

Instead of using words as tokens, Wayve’s system uses images as the input tokens, and just as a language model predicts the next word, here the system predicts the next image.

As the diagram suggests, the system looks at the visual input (image encoder) and what the car is doing (action encoder) to help predict the next step via its internal world model.
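The encode-then-predict loop described above can be sketched in a few lines. Everything below is illustrative: the function names, the toy encoders, and the stand-in “world model” are assumptions for the sake of the example, not Wayve’s actual architecture or API.

```python
# Hypothetical sketch of a world-model step: encode the current camera frame
# and the driver's action into tokens, then predict the next frame's tokens.

def image_encoder(frame):
    # Stand-in: flatten a 2D frame (list of pixel rows) into an embedding.
    return [pixel / 255.0 for row in frame for pixel in row]

def action_encoder(action):
    # Stand-in: put (steering, throttle) into the same value range.
    steering, throttle = action
    return [steering, throttle]

def world_model(image_tokens, action_tokens):
    # Stand-in dynamics: nudge every image token by the mean action signal.
    shift = sum(action_tokens) / len(action_tokens)
    return [t + 0.01 * shift for t in image_tokens]

def predict_next_frame(frame, action):
    # A real system would decode the predicted tokens back into an image;
    # here we just return the predicted embedding.
    return world_model(image_encoder(frame), action_encoder(action))

frame = [[0, 128], [255, 64]]  # tiny 2x2 "camera frame"
next_tokens = predict_next_frame(frame, action=(0.5, 0.2))
```

The point of the structure is that, exactly as with next-word prediction, the predicted output can be fed back in as the next input, which is what lets the model generate whole driving videos.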

Using smaller, more efficient models is the only way out of a spiraling size war, which another chip shortage could exacerbate. The other angle is, of course, getting smaller models onto mobile devices. So expect work to continue in this area.

More Apps that Make Your Own Data Available for AI

If you remember the “quantified self” trend of a few years back, you will see why AI let loose on your personal data could be valuable. This may be no more than connecting a chat interface to your diary, but will probably feature smarter collaborations in the health and life-logging space.

Being able to ask your sleep tracker questions like “How much more sleep did I get this month, compared to last month?” could really help you make better life choices without having to stare at charts. More specific queries like “Which of my commuting routes will be least affected by roadworks?” are a better way to keep solutions centered on what you need, as opposed to shifting environments.
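Much of the work behind that sleep question happens before any LLM gets involved: the question has to become a query over your own records. A minimal sketch, with made-up data and field names, might look like this:

```python
# Toy personal sleep log: (date, hours slept). In a real product this would
# come from a tracker's export or API; the data here is invented.
from datetime import date

sleep_log = [
    (date(2023, 12, 3), 6.5), (date(2023, 12, 10), 7.0),
    (date(2024, 1, 4), 8.0),  (date(2024, 1, 11), 7.5),
]

def monthly_total(log, year, month):
    return sum(hours for day, hours in log if (day.year, day.month) == (year, month))

def sleep_delta(log, this=(2024, 1), last=(2023, 12)):
    # Answers: "How much more sleep did I get this month vs. last month?"
    return monthly_total(log, *this) - monthly_total(log, *last)

print(sleep_delta(sleep_log))  # 2.0 more hours in January than December
```

The chat interface’s job is only the last mile: translating natural language into a call like `sleep_delta` and phrasing the number back to you.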

This is a much easier offering for companies that already own your data. In some cases, the ChatGPT trend is a great way to quietly expose the fact that they can already do this, and have been able to for some time.

Autonomous Agents Will Still Hit the Security and Privacy Fence

The word “agent” gets overused within AI computing. Here I use it to mean programs that use LLMs to understand goals, generate tasks and try completing them.

But the problems they face are the same ones from the years of federated data: How does an agent I control get permission to use someone else’s data?

I don’t think agents will move forward significantly, because their legal and security boundaries have not been worked out. More to the point, much as I respect the companies working hard to leverage LLMs, they seem pointedly uninterested in those legal frameworks.

This could mean that there is a niche opportunity to solve this problem, but experience tells me that it isn’t yet quite in anyone’s interest to solve this. Yet again, it is just better to own or steal as much data as possible. We will come to this again later.

AI Helping Enable Real-Time Monitoring at the Edge

One of the scandals that hit the U.K. recently has been illegal discharges of raw sewage into the rivers of Britain by rogue water companies. No sophisticated equipment is needed to detect this noxious onslaught, but the interest in real-time monitoring of river water quality has clearly increased.

The same goes for measuring sunlight for solar panels, city air pollution and traffic volumes around the world, as we worry about carbon control and efficiency.

The more traditional types of expert systems can help to filter this data, but using machine learning to derive insights autonomously or alter detection parameters can help get in front of problems. This also drives the increase in digital twins, which can detect and react to the environment in real time.
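A toy version of “getting in front of problems” is a filter that flags readings deviating sharply from a rolling baseline, the kind of cheap check an edge device could run before escalating to a heavier model. The window size, threshold and sensor values below are all illustrative assumptions:

```python
# Flag sensor readings that deviate sharply from the rolling mean of the
# previous `window` readings. A real deployment would tune (or learn) these
# parameters; here they are picked for illustration.

def flag_anomalies(readings, window=3, threshold=2.0):
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > threshold:
            flagged.append(i)
    return flagged

# Invented dissolved-oxygen readings (mg/L); the sudden drop at index 5
# is the kind of signature a raw-sewage discharge might leave.
readings = [8.1, 8.0, 8.2, 8.1, 8.0, 4.5, 8.1]
print(flag_anomalies(readings))  # [5]
```

The machine learning angle mentioned above is about replacing the fixed `window` and `threshold` with parameters the system adjusts itself as it learns what “normal” looks like for each sensor.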

Edge Computing and Putting Workloads in Different Places

Somewhat linked to the two earlier points, placing compute nearer the data, so that processing happens where the data is actually collected, is one of those efficiency savings that might start to be quite important in establishing “edge AI.”

This might be the key to allowing better mobile query responses. It implies smaller, more compact machine learning models, which use less power, have lower latency and consume less bandwidth.
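One concrete reason smaller models fit at the edge is quantization: storing 32-bit weights as 8-bit integers cuts memory and bandwidth roughly fourfold at a small cost in precision. The sketch below uses plain min-max quantization on an invented weight list; production toolchains use more sophisticated schemes.

```python
# Min-max quantization: map float weights onto 0..255 integers plus a
# (scale, offset) pair needed to approximately reconstruct them.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # avoid div-by-zero for constant weights
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

w = [-0.51, 0.0, 0.25, 0.49]        # invented float32-style weights
q, scale, lo = quantize(w)          # q fits in one byte per weight
approx = dequantize(q, scale, lo)   # close to w, within half a scale step
```

The tradeoff is exactly the one in the text: each weight now costs a quarter of the storage and transfer, while the reconstruction error stays bounded by the quantization step.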

If you are wondering if this increases the attack surface for hackers — well, yes it does. So security risks have to be carefully woven into these integrated platforms.

More Work for the Chief Data Officer

Suddenly your organization’s data is not simply of vague strategic importance; it now may be powering LLMs without permission. Whatever you think of Elon Musk, his usually garbled logic concerning decisions about how to run social media platforms was correct on this point. And others are finding this out too.

The chief data officer (or equivalent) now needs to work on a fight-or-flight strategy, not just another ecosystem play. Before they can think about making their customers’ data more valuable, they have to make sure it isn’t flying away, as I mentioned above. This will either involve locking it down or offering competing services. Eventually, there will be legal recourse, but that will arrive too late for most smaller organizations.

The bottom line is that an LLM will use your valuable data. So it may as well be an LLM in your control.

The Doomsayers Relent

As the actual products of AI fail to quite match up to the hype, one good thing will be the slowing down of the existential risk circus. Trying to raise media interest by saying how dangerous something is reminds me of kids in a playground, not a serious endeavor to benefit mankind.

As Douglas Adams noted in his fiction, human extinction is still more likely to be caused by a plague contracted from dirty telephones, just as it was for the Golgafrinchams. And with that, I wish you a Happy New Year.

TNS owner Insight Partners is an investor in: Enable, Real.