Research Thesis

The AI Supercycle - A Thesis on the Next 20 Years

Introduction

The promise and peril of AI have historically remained on the horizon. Until now. The future has arrived, and we are building something inevitable.

Revenant Research’s thesis outlines a 20-year AI supercycle that will unfold in three phases. It gives executives, investors, researchers, and policymakers a framework to accurately project AI development and to adapt their organizations and directives to AI’s opportunities and risks.

Phase 1: The Foundational Build-out (2023 - Next Several Years)

The current phase is characterized by the rapid expansion of core AI infrastructure. This is the foundational stage, where the essential tools and resources for widespread AI adoption are being developed and deployed at an accelerated pace.

Semiconductors: The computational intensity of AI is driving unprecedented demand for specialized, high-performance chips.

  • GPUs: NVIDIA's dominant market share in GPUs, bolstered by the CUDA platform, positions it as a primary beneficiary. GPUs are the dominant standard for AI data centers and frontier model training.

  • ASICs: The rise of custom silicon, exemplified by Google's TPUs and Amazon's Inferentia and Trainium chips, highlights a trend where companies are developing specialized chips tailored to their specific AI needs, optimizing for efficiency and performance at scale. Hyperscalers are going vertical.

  • FPGAs: Companies like Xilinx (now part of AMD) and Intel, with its Agilex line, are providing reconfigurable hardware platforms that offer a balance between the performance of ASICs and the flexibility of GPUs, particularly for applications requiring adaptability and low latency.

  • High-Bandwidth Memory: Memory and storage providers are experiencing increased demand for high-bandwidth memory technologies like HBM2 and GDDR6, which are essential for feeding data to AI processors at the required speeds.

Data Centers: The exponential growth in data volume and the computational requirements of AI are fueling a massive expansion and upgrade of data center infrastructure.

  • Specialized Hardware: Data centers are rapidly deploying servers equipped with multiple GPUs or ASICs, high-speed interconnects, and advanced storage systems, all optimized for AI workloads.

  • Cooling and Power Management: Vertiv, a key provider of thermal management and power solutions, reported 43% organic orders growth in Q4 2023, highlighting the critical need for advanced cooling and power infrastructure to manage the increased heat and energy demands of AI hardware.

  • Data Center REITs: Leading data center REITs are expanding their global footprint and upgrading their facilities to meet the stringent requirements of AI deployments. They are capitalizing on long-term leases with major technology companies, driven by the increasing demand for AI-optimized data center space.

Networking: Efficient data movement is crucial in the AI ecosystem.

  • High-Speed Interconnects: Within data centers, high-speed interconnects are essential for rapid communication between servers and storage.

  • Broadband and 5G: The global rollout of 5G networks and the continued expansion of broadband infrastructure facilitate the deployment of AI applications requiring real-time data processing and connectivity. Broadcom, a leader in developing advanced networking chips for data centers and 5G infrastructure, is experiencing significant demand for its products, driven by the need for faster and more efficient data transfer.

Nvidia dominated the GPU market from 2020 to 2025, achieving a 98% market share in data center GPUs and 88% in desktop GPUs in 2023. Despite supply chain issues and competition, Nvidia's data center revenue reached $79.6 billion in the first nine months of fiscal 2025, with "Blackwell" GPUs having a 12-month order backlog. In 2023, Nvidia shipped 3.76 million data center GPUs, a 42% increase from 2022.    

Concurrently, AWS and Meta developed their own AI chips. AWS Trainium, focused on high-performance AI training and inference, powers EC2 Trn1 instances at up to 50% lower training cost than comparable EC2 instances. Meta's MTIA chip targets inference workloads and recommendation models, offering greater compute power and efficiency than CPUs. These chips represent a shift toward specialized hardware for AI, with the global AI ASIC chip market projected to grow at a CAGR of 32.4% from 2024 to 2030.
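To make that growth rate concrete, a simple compounding calculation shows what a 32.4% CAGR implies over the 2024-2030 window (the base market size used below is a hypothetical placeholder, not a figure from this thesis):

```python
# Implied growth of the AI ASIC market at a 32.4% CAGR from 2024 to 2030.
cagr = 0.324
years = 2030 - 2024  # six compounding periods

multiplier = (1 + cagr) ** years
print(f"Implied market multiple over {years} years: {multiplier:.1f}x")
# ~5.4x: a hypothetical $10B market in 2024 would exceed $50B by 2030.
```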

Nvidia is on an unprecedented 12-month product cycle for its platform. Microsoft CEO Satya Nadella has publicly stated that it is important to buy chips every year. The sleepy, commodity-based semiconductor cycle is fundamentally evolving.

This semiconductor evolution is coupled with a massive expansion of AI data centers. Worldwide data center CAPEX grew to $260 billion in 2023 and is projected to surpass $400 billion in 2024. This spending is fueled by hyperscalers like Microsoft, Google, and Amazon, who are investing heavily in AI infrastructure to meet the rising demand for AI services [55]. Notably, Microsoft alone plans to spend $80 billion in fiscal 2025 on AI-enabled data centers. It's not just Big Tech; xAI's Memphis Supercluster, with 100,000 Nvidia H100 GPUs and plans to expand to at least 1 million, exemplifies this trend.

Importantly, this CAPEX is not fueled by speculative investment. Rather, it comes from companies with robust balance sheets and strong free cash flow that are deploying capital strategically to build out their cloud infrastructure and AI capabilities. This suggests a sustained, long-term commitment to AI, not a fleeting trend.

Does this massive, strategic investment in AI-specific infrastructure represent a fundamental shift in the landscape of business computing? 

NVIDIA, Meta, AWS, and Microsoft Azure are putting their money where their mouth is. 

Are we on the cusp of a new era of enterprise software, driven by specialized hardware and AI-optimized data centers? 

This infrastructure buildout is not about training chatbots.

And how will this impact the future of enterprise IT infrastructure, potentially leading to greater reliance on third-party providers, consolidation and outright destruction of B2B SaaS applications, and a rethinking of traditional on-premises solutions?

I don't think any CIO or CTO should be taking a vacation in the next 5 years.

The primary constraint on AI scaling, however, is not chips; it's energy.

The Energy Constraint: The significant energy consumption of AI data centers has emerged as a critical bottleneck, creating both challenges and opportunities.

  • Escalating Power Demands: The IEA's 2022 report highlighted that data centers accounted for 1-1.5% of global electricity use, and this is projected to rise significantly with AI adoption. This places immense pressure on existing power grids.

  • Implications:

    • Renewable Energy: The surging demand for clean energy to power data centers will benefit domestic energy companies positioning themselves to serve these massive facilities.

    • Energy Storage: Growth is anticipated for companies developing advanced battery technologies and energy storage solutions, crucial for integrating intermittent renewable energy sources and ensuring grid stability.

    • Energy Efficiency: Vertiv's cooling technologies and other energy-saving solutions are experiencing strong demand as data centers strive to minimize power consumption. Their strong organic order growth reflects this trend.

This phase is fundamentally about capacity expansion that changes the trajectory of computing. The primary beneficiaries are companies providing the necessary hardware and infrastructure. However, the energy constraint introduces a critical dynamic: companies that can reliably power and service data centers of 150 MW and above will be the winners (a rough power budget is sketched below). This phase may last for several more years, but the seeds of future phases are already being sown.
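To see why 150 MW is the relevant threshold, consider a rough power budget for a 100,000-GPU cluster of the kind xAI is building in Memphis. The per-accelerator wattage, server overhead, and PUE below are assumptions typical of current H100-class deployments, not figures from this thesis:

```python
# Back-of-envelope power budget for a 100,000-GPU AI cluster.
# Assumptions: ~700 W per H100-class accelerator, ~30% additional IT
# load for host CPUs, networking, and storage, and a facility PUE of 1.3.
gpus = 100_000
watts_per_gpu = 700
it_overhead = 1.30  # host CPUs, NICs, storage
pue = 1.30          # total facility power / IT equipment power

it_load_mw = gpus * watts_per_gpu * it_overhead / 1e6
facility_mw = it_load_mw * pue
print(f"IT load: {it_load_mw:.0f} MW, facility draw: {facility_mw:.0f} MW")
# IT load: 91 MW, facility draw: 118 MW -- a single 100k-GPU cluster
# already approaches the 150 MW facility class discussed above.
```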

Phase 2: AI Application Proliferation (Starting in 2025, Gaining Momentum in 2-5 Years)

This phase marks the transition from AI infrastructure build-out to widespread AI application across industries, promising measurable business value. This transition will also cause significant labor market disruption. However, AI automation and job displacement are not catch-all phenomena; they are fundamentally shaped by where a country's economy has overinvested. This uneven distribution of AI's impact creates a two-tiered industrial economy, with some sectors thriving (but with significant job displacement) while other sectors face stagnation (but retain job stability in the short term).

For example, the US has significantly underinvested in manufacturing and robotics, ranking 9th globally in robot density in 2023. This underinvestment makes it more challenging for AI to displace jobs in these sectors, as there is simply not enough automation infrastructure in place for AI to take over. Conversely, the US has overinvested in white-collar knowledge workers. This overinvestment creates a fertile ground for AI automation, as these jobs often involve tasks that can be readily automated, such as data analysis, report writing, and customer service.

This targeted nature of AI automation requires a nuanced understanding of its impact. Executives and policymakers should focus on identifying and adapting to the specific areas where AI is likely to have the most significant impact. This means understanding the unique vulnerabilities and opportunities presented by AI in different sectors and developing targeted strategies to mitigate risks and maximize benefits.

For instance, in the US, the overinvestment in white-collar jobs necessitates a focus on reskilling and upskilling initiatives for workers in those sectors. This could involve providing training in AI-related fields, such as data science and machine learning, or fostering the development of soft skills, such as critical thinking and creativity, that are less susceptible to automation. Simultaneously, the underinvestment in manufacturing and robotics presents an opportunity for strategic investment in these areas. By promoting automation in these sectors, the US can enhance its competitiveness and create new job opportunities in fields like robotics engineering and AI-powered manufacturing.

This targeted approach also requires a shift in focus from infrastructure to application-specific value creation. Companies that can demonstrate tangible ROI and build strong competitive advantages by leveraging AI in specific applications will thrive in this new landscape. This phase will likely be characterized by rapid innovation, intense competition, and the emergence of new market leaders.

Furthermore, understanding the targeted nature of AI automation allows for a more effective response to the skills gap. As AI takes over routine tasks, the demand for workers with skills in critical thinking, creativity, and complex problem-solving will increase. Educational institutions and training programs need to adapt to this shift by providing individuals with the skills and knowledge necessary to navigate the changing demands of the AI-powered workplace.

In conclusion, AI automation is not a monolithic force that will uniformly impact all sectors. Its effects will be concentrated in areas where overinvestment has created vulnerabilities, while underinvestment in other sectors may limit its disruptive potential. By understanding this targeted nature of AI's impact, businesses and policymakers can develop more effective strategies to navigate the challenges and opportunities presented by this transformative technology.

Phase 3: The Institutionalization of AI via Case Law, Regulation, and Treaties

This phase marks a critical juncture in the evolution of artificial intelligence: its institutionalization. AI is no longer a nascent technology confined to research labs; it is becoming deeply embedded in our legal and economic systems, national security strategies, and international relations. This integration presents unprecedented challenges and opportunities, demanding a re-evaluation of existing legal frameworks and the establishment of new regulatory mechanisms.

The Evolving Legal Landscape of AI

The rapid advancement of AI strains the very foundations of traditional legal thought. Concepts like mens rea, the "guilty mind," become nebulous when applied to autonomous systems operating on complex algorithms. It's not simply a matter of proving intent, but of redefining what intent means in a non-human context. With human actors, we infer intent based on observable actions and surrounding circumstances. With AI, "intent" is encoded within the training data and the algorithmic architecture itself, shifting the focus from individual culpability to systemic flaws in design and deployment. This necessitates a move towards a systems-based approach to liability, where responsibility is distributed among developers, deployers, and even data providers. The opacity of some AI models, often referred to as "black boxes," further complicates matters, making it nearly impossible to trace a specific outcome to a singular "decision" by the AI. This challenges the traditional notion of causality in legal reasoning, demanding new forms of evidence and legal interpretation.

This challenge is exemplified by the rise of AI-driven crime. Deepfakes, synthetic media generated by AI, pose a significant threat. These manipulated videos and audio recordings can be deployed for fraudulent wire transfers, reputational attacks, and the dissemination of disinformation. Detecting these sophisticated manipulations and establishing the chain of custody for digital evidence presents a formidable legal hurdle. Existing laws against fraud and defamation, while potentially applicable, struggle to fully encompass the unique nature of deepfakes and their rapid dissemination online. Similarly, the training of AI models on massive datasets scraped from the internet often involves copyrighted material, raising complex questions about fair use and intellectual property. If an AI model trained on copyrighted novels generates text that bears resemblance to the originals, does this constitute infringement? The transformative nature of AI-generated outputs adds layers of complexity to this debate.

AI also plays a role in enhancing traditional cybercrime. AI-powered phishing attacks, for instance, can be highly personalized and difficult to detect, mimicking the language and style of trusted contacts. AI can automate the exploitation of software vulnerabilities, allowing attackers to quickly identify and compromise vulnerable systems. Attributing these attacks and proving negligence in data security practices presents significant legal challenges. The emergence of autonomous systems, such as self-driving cars, introduces further legal complexities. If a self-driving car causes an accident due to a software malfunction, who bears the responsibility? The manufacturer? The software developer? The owner? Existing product liability laws, designed for traditional products, struggle to address the complexities of AI systems that make decisions autonomously. Furthermore, the ability of AI to generate creative content challenges traditional notions of authorship and ownership, raising fundamental questions about intellectual property rights. Finally, algorithmic bias, where AI systems perpetuate existing biases in data, raises serious legal concerns about discrimination, requiring updates to existing anti-discrimination laws.

The nascent nature of AI case law creates both uncertainty and opportunity. Early cases are crucial in setting legal precedent, but judges often lack the technical expertise to fully grasp the intricacies of AI. This creates a risk of inconsistent or even misguided legal interpretations. The development of expert witness testimony and specialized legal expertise in AI is therefore essential for ensuring sound legal decisions.

Regulatory Frameworks and AI Governance

The rapid proliferation of AI necessitates robust regulatory frameworks to ensure responsible development and deployment. The EU AI Act stands as a prominent example of this effort, adopting a risk-based approach that categorizes AI systems into levels of risk. This approach, while innovative, faces challenges. Defining “high-risk” is inherently subjective and risks regulatory capture or the stifling of innovation. The focus on pre-market conformity assessments for high-risk systems could create significant barriers for smaller AI developers and startups. Furthermore, the global reach of the EU AI Act raises questions of extraterritoriality and the potential for a “Brussels effect,” where EU regulations become de facto global standards. This could have unintended consequences, such as favoring European AI companies or hindering the development of AI solutions tailored to specific regional needs.

Other jurisdictions are also developing their own approaches. The US, for example, has taken a more sector-specific approach, with various agencies and initiatives addressing specific aspects of AI. This contrasts with the EU’s comprehensive approach and presents a different set of trade-offs between regulatory oversight and flexibility. China, meanwhile, has focused on regulating AI in areas such as data security, algorithmic bias, and its use in national security. This divergence in regulatory philosophies highlights the challenges of achieving international harmonization.

Across these diverse approaches, several key regulatory focus areas have emerged. Data privacy and security are paramount, with regulations like GDPR and CCPA setting strict rules for data handling. Algorithmic bias and fairness are also key concerns, demanding attention to data collection and preprocessing, as well as techniques for detecting and mitigating bias in algorithms. The complex issue of liability and accountability in cases involving AI systems is being addressed through regulations that establish clear lines of responsibility. Transparency and explainability are also crucial for building trust in AI, driving the development of XAI techniques. However, there’s a tension between explainability and performance: highly complex models are often more accurate but harder to explain, raising the question of which should be prioritized in different contexts. Finally, sector-specific regulations are emerging to address the unique challenges posed by AI in areas like healthcare, finance, and transportation.

Geopolitical Competition and International Norms

AI has become a central arena of geopolitical competition, particularly between the US and China. This rivalry is not merely a technological race; it is a battle for global influence and the future of technological governance. The US, with its emphasis on open markets and private sector innovation, presents a stark contrast to China’s state-driven approach. This creates a tension between competing models of AI development and raises critical questions about data sovereignty, technological standards, and the potential for the fragmentation of global AI ecosystems. The concept of AI sovereignty, the ability of a nation to control its own AI development and deployment, is becoming increasingly important. The risk is not necessarily a complete "splinternet," but rather the emergence of distinct, largely incompatible AI ecosystems, each with its own set of standards, regulations, and technological infrastructure.

The US has implemented export controls on advanced AI chips and semiconductor manufacturing equipment to limit China’s access to cutting-edge technology. The CHIPS and Science Act aims to bolster domestic chip manufacturing and reduce reliance on foreign suppliers. Sanctions have also been imposed on certain Chinese AI entities. China has responded with its own initiatives, including the “Made in China 2025” plan, which prioritizes self-sufficiency in critical technologies. China’s state-driven AI strategy outlines ambitious goals for global leadership by 2030, and the Digital Silk Road initiative promotes Chinese AI technologies and standards in other countries, seeking to establish its own sphere of influence within the global AI ecosystem.

This rivalry has significant implications for the evolution of global AI ecosystems. The emergence of “national champions” could lead to a concentration of power in the hands of a few large companies aligned with specific national interests, potentially stifling competition and innovation within the broader global landscape. The potential for regulatory arbitrage, where companies exploit differences in regulations between countries, could lead to a “race to the bottom” in terms of safety and ethical standards. Conversely, it could also incentivize regulatory innovation as countries compete to attract AI investment. The key concern is the fragmentation of the global AI ecosystem into competing blocs, hindering interoperability, data sharing, and the development of shared global norms. The pursuit of AI sovereignty by individual nations, while understandable, carries the risk of creating barriers to international collaboration and hindering the realization of the full potential of AI for global good.

International Treaties and Norms

The geopolitical landscape of AI necessitates the development of international treaties and norms to ensure responsible development and mitigate potential risks. However, achieving international consensus on AI governance is a complex undertaking, fraught with challenges.

Existing international law provides a partial foundation for addressing some aspects of AI. Treaties related to cybersecurity, such as the Budapest Convention on Cybercrime, offer a starting point for addressing AI-enabled cyberattacks. However, these treaties were not designed specifically with AI in mind and may require updates or supplementary agreements to fully address the unique challenges posed by AI-driven crime. International human rights law, as enshrined in the Universal Declaration of Human Rights, is also relevant, particularly in the context of algorithmic bias and discrimination. These broad principles, however, require careful interpretation and application to specific AI systems. For example, applying existing international humanitarian law to the use of autonomous weapons systems raises complex questions about defining "meaningful human control" in the context of AI. The lack of clear definitions and standards creates ambiguity and hinders effective enforcement.

Beyond existing treaties, various international organizations are working to develop emerging international norms for AI. Organizations like the OECD, UNESCO, and the G20 have published non-binding principles and guidelines for responsible AI development and use. These initiatives aim to promote shared values and best practices, but their lack of legal enforceability limits their effectiveness. Translating these high-level principles into concrete standards and mechanisms for implementation remains a significant challenge. A comparative analysis of the different sets of principles developed by various organizations reveals areas of consensus and divergence, highlighting the difficulties of achieving international agreement on specific issues. For example, while most organizations agree on the importance of transparency and accountability, there is less consensus on how these principles should be implemented in practice.

Achieving international consensus on AI governance faces significant hurdles. Differing national interests, values, and legal traditions create obstacles to cooperation. The US and China, with their divergent approaches to AI regulation and their competing visions for the future of the AI ecosystem, present a particularly significant challenge. Issues of data sovereignty and national security further complicate matters, creating barriers to data sharing and collaboration on AI research. The pursuit of AI sovereignty by individual nations, while a legitimate concern, can also hinder the development of shared global norms and standards.

International organizations play a crucial role in facilitating dialogue, developing common standards, and promoting best practices in AI governance. However, these organizations often face challenges in terms of funding, mandate, and enforcement power. Strengthening the capacity of international organizations to address AI governance is essential for achieving effective international cooperation. Examining the role of the UN in this context reveals both the potential for global cooperation and the limitations of existing institutional frameworks. The development of new international agreements or the adaptation of existing ones may be necessary to establish a more robust framework for international AI governance.

Measuring the AI Supercycle: Two Essential Indices

To quantify and track the progress of the AI Supercycle, Revenant Research has built two indices: the AI Efficiency Index (AIEI) and the AI Intelligence Index (AIII).

1. The AI Efficiency Index (AIEI): Quantifying Infrastructure Prowess

The AIEI assesses the operational efficiency of AI infrastructure. It provides a quantitative measure of how effectively companies utilize computational resources, manage energy consumption, and implement technological advancements. An illustrative sketch of how these components might be combined into a single score follows the list below.

Components:

  1. Model FLOP Utilization (MFU): Measures the percentage of available floating-point operations (FLOPs) actively utilized by AI models.

  2. Stable FLOP Utilization (SFU): Assesses the consistency of FLOP utilization over time.

  3. Memory Efficiency Factor (MEF): Evaluates memory bandwidth and capacity utilization.

  4. Power Utilization Efficiency (PUE): Measures the ratio of total energy consumed to the energy used for computation.

  5. Energy Innovation (EI): Captures advancements in energy sourcing, storage, and management.

  6. Semiconductor Advancements (SA): Reflects the sophistication of the underlying semiconductor technology.

  7. Cooling and Infrastructure Innovation (CI): Assesses the efficiency of cooling systems and data center design.
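The sketch below shows one illustrative way to roll these components into a single AIEI score. The weights, the 0-to-1 normalization, and the inversion of PUE (where lower is better) are assumptions made for this example, not Revenant Research's published methodology:

```python
# Illustrative AIEI composite: a weighted average of component scores.
# All scores except PUE are assumed normalized to [0, 1], higher is
# better; PUE (>= 1.0, lower is better) is inverted before weighting.
AIEI_WEIGHTS = {
    "MFU": 0.20,  # model FLOP utilization
    "SFU": 0.15,  # stable FLOP utilization over time
    "MEF": 0.15,  # memory efficiency factor
    "PUE": 0.20,  # power utilization efficiency
    "EI":  0.10,  # energy innovation
    "SA":  0.10,  # semiconductor advancements
    "CI":  0.10,  # cooling and infrastructure innovation
}

def aiei(components: dict) -> float:
    """Weighted composite in [0, 1]; a PUE of 1.0 maps to a perfect score."""
    scores = dict(components)
    scores["PUE"] = 1.0 / scores["PUE"]  # invert: lower PUE scores higher
    return sum(w * scores[k] for k, w in AIEI_WEIGHTS.items())

# Example: a well-run frontier training cluster (hypothetical scores).
cluster = {"MFU": 0.45, "SFU": 0.90, "MEF": 0.70, "PUE": 1.25,
           "EI": 0.60, "SA": 0.80, "CI": 0.70}
print(f"AIEI: {aiei(cluster):.2f}")  # AIEI: 0.70
```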

2. The AI Intelligence Index (AIII): Gauging Application Sophistication

The AIII evaluates the intelligence and capabilities of AI applications, providing a framework for assessing their sophistication across various dimensions. An illustrative scoring sketch follows the component list below.

Components:

  1. Complexity of Task (COT): Measures the AI's ability to handle complex, multi-faceted, and open-ended problems. This can be broken down into sub-components like Reasoning Ability (how well can the AI reason and draw inferences), Problem-Solving (how effectively can the AI solve novel problems), and Adaptability (how well can the AI adapt to changing environments and tasks).

  2. Scalability (S): Evaluates the AI's ability to scale across datasets, users, and computational resources.

  3. Real-Time Responsiveness (RTR): Assesses the AI's ability to process information and make decisions with minimal latency.

  4. Autonomy (A): Measures the level of independence in the AI's decision-making, based on a hierarchical scale of autonomy levels or the frequency of human intervention required.

  5. Energy Efficiency (EE): Evaluates the AI's energy consumption relative to its performance, measured in performance per watt.

  6. Integration of Advanced Models (IAM): Assesses the sophistication and diversity of the AI models employed as well as their performance on relevant benchmarks.
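As with the AIEI, these components can be combined into a single score. The sketch below is one hypothetical formulation: it rolls the three COT sub-components into a composite COT score and then averages the six top-level components with equal weights, both of which are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class AIIIScores:
    """All scores assumed normalized to [0, 1], higher is better."""
    reasoning: float          # COT sub-component: reasoning ability
    problem_solving: float    # COT sub-component: novel problem-solving
    adaptability: float       # COT sub-component: adapting to new tasks
    scalability: float        # S
    responsiveness: float     # RTR
    autonomy: float           # A
    energy_efficiency: float  # EE (normalized performance per watt)
    model_integration: float  # IAM (benchmark-based)

    def cot(self) -> float:
        # Complexity of Task: equal-weighted mean of its sub-components.
        return (self.reasoning + self.problem_solving + self.adaptability) / 3

    def composite(self) -> float:
        # Equal-weighted mean of the six top-level components.
        parts = [self.cot(), self.scalability, self.responsiveness,
                 self.autonomy, self.energy_efficiency, self.model_integration]
        return sum(parts) / len(parts)

# Example: scoring a hypothetical autonomous coding assistant.
app = AIIIScores(0.8, 0.7, 0.6, 0.9, 0.7, 0.5, 0.6, 0.8)
print(f"COT: {app.cot():.2f}, AIII: {app.composite():.2f}")  # 0.70, 0.70
```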

Strategic Perspectives: Investing, Partnering, and Aligning with the AI Supercycle

Organizations can engage with the AI Supercycle from multiple strategic perspectives: investing, partnering, and aligning. Each perspective requires a tailored approach to maximize opportunities and manage risks.

Investing: For investors, the AI Supercycle presents a long-term growth opportunity. An illustrative first-pass screen based on these criteria follows the list below.

  • Key Criteria for Investment:

    • AI Alignment: Companies must have a clear and demonstrable connection to the AI Supercycle, either as infrastructure providers or developers of AI-powered applications.

    • Financial Strength: Amid persistent inflation, companies should possess healthy balance sheets, characterized by low debt relative to equity, substantial cash reserves, and strong credit ratings. They should also exhibit robust and consistently growing free cash flow generation.

    • Technological Leadership: A track record of innovation, a commitment to research and development, and evidence of technological leadership in their respective domains are essential.

    • Scalability and Growth Potential: Companies should possess business models that are inherently scalable, with the potential for significant long-term growth as the AI market expands.

    • Strong Management Team: A capable and experienced management team with a clear vision for the role of AI in their industry is crucial.

    • Addressing the Energy Bottleneck: Given the increasing importance of energy efficiency, companies that are actively developing or implementing solutions to address the energy constraints of AI infrastructure will be viewed favorably.

    • Geopolitical Resilience: In light of the growing US-China rivalry in AI, companies should demonstrate resilience to geopolitical risks. This includes diversified revenue streams and supply chains and operations that are less susceptible to disruptions caused by export controls, sanctions, or other geopolitical tensions.
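These criteria can be translated into a rough quantitative screen before any qualitative diligence. Every field name and threshold below is a hypothetical illustration, not an actual Revenant Research screening parameter, and qualitative criteria such as management quality and technological leadership still require analyst judgment:

```python
# Toy first-pass screen against the investment criteria above.
# All field names and thresholds are hypothetical illustrations.
def passes_screen(company: dict) -> bool:
    return (
        company["ai_revenue_share"] >= 0.25        # AI alignment
        and company["debt_to_equity"] < 0.50       # financial strength
        and company["fcf_growth_3yr"] > 0.0        # growing free cash flow
        and company["rd_intensity"] >= 0.10        # technological leadership
        and company["china_revenue_share"] < 0.30  # geopolitical resilience
    )

candidate = {
    "ai_revenue_share": 0.60, "debt_to_equity": 0.35,
    "fcf_growth_3yr": 0.18, "rd_intensity": 0.22,
    "china_revenue_share": 0.15,
}
print(passes_screen(candidate))  # True
```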

Partnering: For companies seeking to leverage AI capabilities without developing them in-house, strategic partnerships are essential.

  • Key Criteria for Partnerships:

    • Complementary Capabilities: Partners should offer complementary technologies, expertise, or market access that enhance the company's ability to capitalize on the AI opportunity.

    • Shared Vision: A shared understanding of the transformative potential of AI and a commitment to mutual success are crucial for a fruitful partnership.

    • Technological Compatibility: The technologies and data flows of partnering organizations should be seamlessly integrated to ensure smooth collaboration and efficient operation.

    • Trust and Transparency: Partnerships should be built on a foundation of trust and transparency, with clear agreements on data ownership, intellectual property rights, and ethical guidelines.

Aligning: For organizations across all sectors, aligning internal operations and strategies with the AI Supercycle is paramount for long-term competitiveness.

  • Key Actions for Alignment:

    • AI Readiness Assessment: Organizations should conduct a thorough assessment of their current infrastructure, data assets, and talent pool to identify areas where AI can be effectively adopted and to pinpoint any gaps that need to be addressed.

    • Use Case Identification: A focused effort should be made to identify specific business problems or processes that can be improved or transformed through the application of AI solutions.

    • Talent Development: Investing in training and development programs is essential to upskill the workforce and prepare them for an AI-driven future. This includes both technical skills in AI development and deployment, as well as broader skills related to data analysis and interpretation.

    • AI Governance: Organizations should develop and implement guidelines and principles to govern the development and deployment of AI within their operations.

    • Geopolitical Strategy: Given the increasing geopolitical complexities surrounding AI, organizations need a strategy for navigating the emergence of two separate AI ecosystems centered on the US and China.

Additional Tracking:

  • Revenant Research conducts monthly tracking of AI-related cases in both civil and criminal courts.

  • Revenant Research has compiled and tracks over 275 emerging AI regulations proposed by governments and international bodies across the world.

Conclusion

The AI Supercycle represents a transformative force with far-reaching implications for businesses, governments, and society. This document provides a framework for understanding its phases, utilizing the AIEI and AIII indices to track progress, and making informed strategic decisions. By focusing on the foundational infrastructure, recognizing the critical role of energy, anticipating the rise of AI applications, and proactively navigating the evolving landscape of AI governance and geopolitics, stakeholders can effectively position themselves to thrive in this new era.

For consulting or custom reports contact me at nathan@revenantai.com