
The path to 2030: tackling the complexities of 6G testability


The end of the decade will be here sooner than we think, so it’s important to ensure we’re ready for 6G

The future of wireless is closer than you think. The advent of 6G promises higher performance and flexibility to enable use cases far beyond what we do with today’s wireless systems. With the commercial availability of these next-generation networks expected by the early 2030s, the industry is shifting from research into the development and standardization phase.

6G will usher in a new era of global connectivity with new use cases, devices, and services. The next-generation network promises to transform communications and underpin an expansive, intelligent edge with unprecedented data rates, ultra-reliable connectivity, deeper immersive experiences, ubiquitous coverage, AI-native capabilities, and quantum security.

Effectively validating and optimizing these innovations for real-world deployment is a key driver of 6G’s success. However, the complexity of testing scenarios has increased significantly, as use cases such as autonomous mobility, mixed reality, and eHealth require a more user-centric approach. For 6G to deliver on its potential of creating a more intelligent and responsive digital environment, testing methods must therefore expand beyond established parameters and address the following.

[Image: Work is underway on 6G networks. Whether the industry needs them is another matter]

Integrating AI/ML

A core component of 6G is the integration of artificial intelligence (AI) and machine learning (ML). Research is underway into new models that can optimize network performance, manage resources, improve security, handle the complexities of radio beam management, and reduce power consumption. The latter could be achieved, for example, through intelligence that switches components on and off based on real-time operational data, helping optimize energy usage.
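As an illustration of that energy-saving idea, the sketch below pairs a stand-in load predictor with a hysteresis-based power decision. The class, function names, and thresholds are all hypothetical, chosen only to show the pattern; a real deployment would use a trained model fed live telemetry and operator-defined budgets.

```python
# Minimal sketch (not a standard API): an energy-saving controller that
# powers radio components up or down based on predicted short-term load.
# The predictor stands in for a trained ML model; thresholds are
# illustrative assumptions, not values from any 6G specification.
from dataclasses import dataclass

@dataclass
class CellState:
    active_users: int
    traffic_mbps: float

def predict_load(history: list[CellState]) -> float:
    """Placeholder predictor: a moving average of recent traffic.
    In practice this would be a trained model on real-time telemetry."""
    if not history:
        return 0.0
    return sum(s.traffic_mbps for s in history) / len(history)

def power_decision(predicted_mbps: float,
                   sleep_threshold: float = 5.0,
                   wake_threshold: float = 20.0,
                   currently_on: bool = True) -> bool:
    """Hysteresis keeps components from toggling on borderline loads."""
    if currently_on and predicted_mbps < sleep_threshold:
        return False   # power down secondary carriers / RF chains
    if not currently_on and predicted_mbps > wake_threshold:
        return True    # power back up before demand arrives
    return currently_on

history = [CellState(3, 4.1), CellState(2, 3.8), CellState(2, 3.5)]
on = power_decision(predict_load(history), currently_on=True)
print(f"keep components powered: {on}")
```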

Rather than relying on large language models, ML applications for 6G are trained on a combination of technical data from networks and circuits and on synthesized data from simulation and emulation tools. These models require extensive evaluation to ensure they are robust and reliable, which means training on diverse datasets, measuring performance against traditional methods, and establishing new testing methodologies. Taking these critical steps will help ensure the responsible and effective adoption of this emerging technology.
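The kind of comparison described here, measuring a model against a traditional method on synthesized data, could look like the following sketch. The "simulator", the subset-probing model, and the agreement metric are all assumptions for illustration, not a standardized test procedure.

```python
# Illustrative sketch: compare an ML-style predictor against a traditional
# baseline on synthesized data. Real evaluations would use channel
# emulators and the operator's own KPIs instead of this toy setup.
import random

def synthesize_measurement(n_beams: int = 8) -> list[float]:
    """Stand-in for a channel simulator: per-beam signal quality scores."""
    return [random.gauss(0.0, 1.0) for _ in range(n_beams)]

def baseline_pick(scores: list[float]) -> int:
    """Traditional method: exhaustive sweep, pick the strongest beam."""
    return max(range(len(scores)), key=lambda i: scores[i])

def model_pick(scores: list[float]) -> int:
    """Stand-in for an ML model that probes only a subset of beams
    (e.g. to cut measurement overhead) and infers the best choice."""
    probed = scores[::2]  # model measures every other beam
    best_probed = max(range(len(probed)), key=lambda i: probed[i])
    return best_probed * 2

random.seed(0)
trials = 10_000
agree = sum(
    model_pick(m) == baseline_pick(m)
    for m in (synthesize_measurement() for _ in range(trials))
)
print(f"model matched exhaustive sweep in {agree / trials:.1%} of trials")
```

A harness like this makes the trade-off explicit: the model saves half the beam measurements, and the evaluation quantifies how often that shortcut costs accuracy relative to the traditional sweep.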

Testing frameworks must also weigh the benefits of added intelligence against its additional complexity and cost. As part of this, KPIs should track energy consumption, computational demands, speed, and reliability. As intelligence and autonomy increase, testing strategies must adapt and expand so that even rare situations are evaluated, guaranteeing performance in real-world deployments while 3GPP works on a framework for adding AI to cellular standards.
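One way to make that KPI tracking concrete is to record energy, latency, and correctness per scenario and issue per-scenario verdicts, so rare situations cannot be averaged away. The fields and budgets below are illustrative assumptions, not values from any standard or 3GPP specification.

```python
# A sketch of the KPI bookkeeping suggested above. Pass/fail budgets are
# illustrative; real programmes would derive them from standardized
# requirements once an AI/ML framework is in place.
from dataclasses import dataclass

@dataclass
class InferenceKpi:
    scenario: str          # e.g. "urban-dense", "tunnel-handover" (rare case)
    energy_mj: float       # energy per inference
    latency_ms: float      # decision speed
    correct: bool          # agreed with ground truth / reference method

def summarize(results: list[InferenceKpi],
              energy_budget_mj: float = 50.0,
              latency_budget_ms: float = 2.0) -> dict[str, bool]:
    """Per-scenario verdict so rare scenarios can't hide inside averages."""
    verdict: dict[str, bool] = {}
    for r in results:
        ok = (r.energy_mj <= energy_budget_mj
              and r.latency_ms <= latency_budget_ms
              and r.correct)
        verdict[r.scenario] = verdict.get(r.scenario, True) and ok
    return verdict

runs = [
    InferenceKpi("urban-dense", 32.0, 1.1, True),
    InferenceKpi("tunnel-handover", 41.5, 1.8, True),  # rare but must pass
    InferenceKpi("tunnel-handover", 48.9, 2.4, False),
]
print(summarize(runs))  # {'urban-dense': True, 'tunnel-handover': False}
```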