MWC 2024: AI everywhere

BARCELONA—Artificial intelligence is dominating conversations and sessions at Mobile World Congress Barcelona 2024, with telecom companies and their vendors all working to figure out—or support the exploration of—just what AI can do for telecom, both in terms of benefits and risks.

“It’s the most talked-about topic,” said Rahul Kumar, IBM’s global consulting lead for telco. And, he added, companies are finding that they can jump in quickly. “If you look at the past versions of AI, clients took a lot of time just experimenting. One of the reasons was, it required a lot of effort and data to get AI to work. This time around, the technology is pretty fast,” he said, and far easier to use to get useful outputs. The consumerization of tools like ChatGPT also means that more people are already familiar with using AI, which makes it a shorter jump to thinking about the value that AI can drive within their businesses.

Kumar said that IBM is already working with telecom clients who are using its AI platform, for example for customer care or code creation. IBM has also started working with clients on the use of AI in network operations. He offered this scenario: Network engineers often have to reference manuals from various equipment and solution providers as part of troubleshooting network issues. A telco-centric solution that draws on both traditional AI and generative AI might use a large language model to ingest all of the relevant technical manuals and provide a natural-language interface to network engineers, add insights on whether the issue has occurred in the past and how it was resolved, based on ticket data, and suggest possible solutions, or even, eventually, automate those next-best actions.
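Neither Kumar nor IBM detailed an implementation, but the pattern he describes is essentially retrieval-augmented generation: pull the most relevant passages from vendor manuals and historical trouble tickets, then hand them to an LLM as context for the engineer’s question. The sketch below is a minimal, hypothetical illustration of that flow; the sample documents, the toy keyword-overlap retriever and the `call_llm` stub are placeholders, not IBM’s system.

```python
# Hypothetical sketch of a retrieval-augmented troubleshooting assistant.
# The documents, scoring and call_llm() stub are illustrative placeholders.

MANUAL_SNIPPETS = [
    "Vendor A radio: alarm 7231 indicates a VSWR fault on the antenna feeder.",
    "Vendor B baseband: restart procedure for stuck S1 links.",
]

PAST_TICKETS = [
    "INC-1042: alarm 7231 at site BCN-12, resolved by replacing the jumper cable.",
]

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted foundation model."""
    return "[model response would appear here]"

def troubleshoot(question: str) -> str:
    # Combine manual passages and past-ticket history as context for the model.
    context = retrieve(question, MANUAL_SNIPPETS) + retrieve(question, PAST_TICKETS)
    prompt = (
        "You are a RAN troubleshooting assistant.\n"
        "Context:\n" + "\n".join(context) +
        f"\n\nEngineer question: {question}\n"
        "Cite which manual or ticket each suggestion comes from."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(troubleshoot("What does alarm 7231 mean and how was it fixed before?"))
```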

The opportunity presented by AI “is bigger and moving faster than anything that we have seen before,” said Michael Dell, chairman and CEO of Dell Technologies, in a keynote address. “It took us a long time to get 5 billion people on the internet—but getting 5 billion people on AI is happening almost instantly.”

But that speed raises serious questions around ethics and governance. Where are the guardrails? They may not be built in, or they may fail to anticipate how humans actually use AI tools, and models’ outputs are only as good as their inputs. Kumar said that as IBM developed its foundation model, it had to cleanse the available input data so heavily that only about a third of it remained, in order to meet IBM’s standards for solid, ethical and safe data to use as a model basis.

Governance of models, then, becomes critical. Who is watching the inputs and outputs on a continuing basis, making sure that models don’t drift from their intended use or hallucinate, producing inaccurate or nonsensical results? Kumar offered the example of a model that provides network information: if it was trained on granular incident records but a new network engineer only logs high-level summaries, the model’s outputs can quietly degrade.
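There is no single standard for this kind of monitoring, but even a crude check can surface the scenario Kumar describes. The sketch below is a hypothetical illustration only: it compares the level of detail in newly logged incident reports against an assumed training-era baseline and raises a flag when the gap widens. The baseline, threshold and word-count scoring are placeholders, not a production drift detector.

```python
# Hypothetical input-drift check: flag when incoming incident reports are
# much less detailed than the data the model was trained on.
from statistics import mean

TRAINING_AVG_TOKENS = 180  # assumed average length of training-era incident reports

def detail_score(report: str) -> int:
    """Crude proxy for detail: number of words in the report."""
    return len(report.split())

def input_drift_detected(recent_reports: list[str], ratio_threshold: float = 0.5) -> bool:
    """Return True if recent reports carry far less detail than the training baseline."""
    recent_avg = mean(detail_score(r) for r in recent_reports)
    return recent_avg < ratio_threshold * TRAINING_AVG_TOKENS

recent = [
    "Site down, restarted.",             # terse summary from a new engineer
    "Alarm cleared after power cycle.",
]

if input_drift_detected(recent):
    print("Warning: incident reports are far less detailed than the training corpus; "
          "model outputs may degrade.")
```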

At the VMware/Broadcom booth, a demo from Aira Technologies of the company’s RAN GPT, a large language model for querying and controlling the radio access network, provided an intriguing glimpse both of what might be operationalized and of built-in guardrails. Type in a query asking for a graph of energy usage across multiple sites in multiple bands (using emulated network information from Viavi Solutions), and the system’s first response is an affirmation that the query is non-harmful and can be answered. The LLM then lays out its methodology and related code before presenting the requested data, which can in turn be used as the basis for asking RAN GPT to turn off a specific site.
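Aira has not published how RAN GPT works internally, but the demo suggests a common agent pattern: a safety check on the query, an explicit statement of the plan, and network actions gated behind confirmation. The sketch below is a purely illustrative stand-in for that pipeline; the `is_harmful` heuristic, `plan` and `turn_off_site` stubs and the site identifiers are all assumptions, not Aira’s implementation.

```python
# Hypothetical pipeline suggested by the RAN GPT demo: guardrail check,
# explicit methodology, then a gated action. All logic here is illustrative.

BLOCKED_INTENTS = ("shut down all", "disable every site", "delete")

def is_harmful(query: str) -> bool:
    """Crude guardrail: reject queries matching known-destructive phrasing."""
    return any(phrase in query.lower() for phrase in BLOCKED_INTENTS)

def plan(query: str) -> str:
    """Stand-in for the LLM step that explains its methodology before acting."""
    return f"Plan: aggregate energy readings per site and band, then chart them for: '{query}'"

def turn_off_site(site_id: str, confirmed: bool) -> str:
    """Stand-in for a network action that requires explicit operator confirmation."""
    if not confirmed:
        return f"Refused: turning off {site_id} requires operator confirmation."
    return f"Site {site_id} powered down."

query = "Plot energy usage across sites BCN-01 to BCN-04 in bands n78 and B3"
if is_harmful(query):
    print("Query rejected by guardrail.")
else:
    print("Query judged non-harmful.")
    print(plan(query))
    # ...a chart would be produced here from (emulated) network data...
    print(turn_off_site("BCN-03", confirmed=True))
```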

In an MWC keynote, Microsoft Vice Chair and President Brad Smith compared AI to the printing press in its potential to create an entirely new economy based on the dissemination of information. He also offered up 11 principles for how Microsoft will “operate [its] AI datacenter infrastructure and other important AI assets around the world”: its “AI Access Principles.”

Microsoft has announced around $5.6 billion in AI data center and cloud investments in Europe, including up to $2.1 billion that will quadruple its investments in AI and cloud infrastructure in Spain this year and next. That is Microsoft’s largest-ever investment in Spain, according to the company; Microsoft will open a cloud region of data centers in Madrid and also plans to build a data center campus in Aragon to serve European companies and public entities.

In an accompanying blog post, Smith wrote that those AI access principles “build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company’s market position in the PC operating system market, we published a set of ‘Windows Principles.’ Their purpose was to govern the company’s practices in a manner that would both promote continued software innovation and foster free and open competition.”

His post continued: “Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond.”

That tectonic shift is evident this week at MWC, even if it is still very much taking shape. AI may be touted in nearly every booth at the show, but the full implications for chips, data centers, devices and networks—both internally, and in the data being carried—are only starting to be considered.
