I Watched Anthropic Launch a Services Company and Felt the Ground Shift Under Indian IT
The Announcement That Changed the Framing
Anthropic partnered with Blackstone, Hellman & Friedman, and Goldman Sachs to form an independent enterprise AI services company. The mandate is specific: place Applied AI engineers directly inside mid-sized businesses across healthcare, finance, manufacturing, retail, and real estate, and embed Claude into their core operations. Applied AI engineers, to define the term, are engineers whose primary job is taking AI models and integrating them into real business systems, not building the models themselves.
Anthropic CFO Krishna Rao described it plainly: enterprise demand for Claude is outpacing any single delivery model they had before. That sentence tells you everything. The bottleneck is deployment, not the model. So Anthropic built a company to own that bottleneck.
This is the moment where Anthropic stopped being a research organization that sold API access and became a company that deploys people inside your business to transform how it runs.
What Infosys, TCS, and Wipro Actually Do
To understand why this announcement matters for Indian IT, you need to understand the business model these companies built.
Infosys, TCS, and Wipro are IT services giants. Their core business is deploying large pools of skilled engineers inside enterprise clients worldwide to build, maintain, modernize, and manage technology systems. A bank in Germany hires TCS to manage its core banking infrastructure. A healthcare company in the US hires Infosys to migrate its data systems to the cloud. A manufacturing group in France hires Wipro to redesign its supply chain software. The engineers live close to the client, understand the client's systems deeply, and bill by the engagement or by the hour.
This model generates enormous revenue. TCS alone crossed $29 billion in annual revenue in fiscal 2025. The model works because technology transformation inside large organizations is complex, slow, and requires sustained human presence. Indian IT companies became the dominant force in that space over thirty years of deliberate execution.
Anthropic's new company does the same thing. It deploys engineers inside enterprises to transform how those businesses operate. The difference is the engineers arrive paired with frontier AI that can replace or accelerate large portions of the work that previously required many more people.
The Infosys Irony
In February 2026, Infosys announced a partnership with Anthropic to integrate Claude into its Topaz AI platform. Topaz is Infosys's internal AI services layer, the platform through which it delivers AI-augmented services to its clients. The idea was to offer enterprise clients access to Claude's capabilities through Infosys's existing relationships and delivery infrastructure.
Three months later, Anthropic formed a company to go directly to those enterprise clients themselves.
The partnership model assumed Anthropic would remain upstream: building the model, selling access, and letting services companies like Infosys handle the client relationship. Anthropic decided the client relationship is where the value actually lives. So they moved downstream and took it.
Axis Securities flagged this as a near-term threat to large-cap IT companies, with real pressure expected on contract renegotiation. The pressure is structural. If a mid-sized healthcare company can hire Anthropic's new company to embed AI into its operations directly, with engineers who arrive already fluent in Claude's capabilities, the value proposition of the traditional IT services engagement weakens significantly.
This Shift Has Been Building
It would be wrong to frame this as a sudden disruption. The ground has been moving under Indian IT for two years.
Entry-level hiring at major IT firms has been contracting. Infosys hired roughly 50,000 freshers in fiscal 2022 and that number dropped sharply in 2024 and 2025 as automation absorbed work that previously required large headcounts. The bench model, where IT companies maintain large pools of trained employees ready to deploy on new projects, becomes harder to justify when AI can handle the routine work those bench engineers used to do.
What Anthropic's announcement does is accelerate and formalize that pressure. It signals to the market that a well-capitalized, frontier AI company with $1.5 billion behind it is now competing for the same enterprise transformation contracts that Indian IT has dominated. That changes the pricing dynamic, the talent dynamic, and the sales conversation.
Where the Work Is Actually Going
The prediction of mass job elimination is a lazy one, and it misses what is actually happening. The industry is restructuring, and specific categories of work are growing fast while others compress.
AI integration engineering is the clearest growth area. This is the work of connecting large language models to the existing systems that businesses already run. ERP systems, short for enterprise resource planning, are the platforms that large companies use to manage finance, supply chain, human resources, and operations in one place. SAP and Oracle are the dominant vendors here. These systems have been running inside large enterprises for twenty to thirty years. They store enormous amounts of operational data and they run business-critical processes that the organization cannot afford to break.
Connecting an LLM to an ERP system requires understanding both layers. You need to know how the model handles context, how it fails when the context is incomplete, and how to build guardrails around it. You also need to know how the ERP system structures its data, what its APIs look like, and what happens operationally if the integration produces a wrong output. That combination of skills is rare right now and the demand for it is significant.
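To make the guardrail idea concrete, here is a minimal sketch of the pattern. Everything in it is hypothetical: the `JournalEntry` shape, the allow-listed account codes, and the posting callback stand in for whatever the real ERP exposes. The point is the structure: the model's output is validated against the ERP's own rules before anything is written, and failures route to a human instead of into the system of record.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    account: str
    amount_cents: int
    currency: str

# Hypothetical allow-list; a real deployment would read this
# from the ERP's actual chart of accounts.
VALID_ACCOUNTS = {"6100-TRAVEL", "6200-SUPPLIES", "6300-SOFTWARE"}

def validate_entry(entry: JournalEntry) -> list[str]:
    """Guardrail: collect every reason this model output must not reach the ERP."""
    errors = []
    if entry.account not in VALID_ACCOUNTS:
        errors.append(f"unknown account {entry.account!r}")
    if entry.amount_cents <= 0:
        errors.append("amount must be positive")
    if entry.currency != "USD":
        errors.append(f"unsupported currency {entry.currency!r}")
    return errors

def post_with_guardrails(entry: JournalEntry, post_fn) -> bool:
    """Call the ERP's posting API only when every check passes;
    otherwise escalate to a human review queue (here: return False)."""
    if validate_entry(entry):
        return False  # never write suspect data into a system of record
    post_fn(entry)
    return True
```

The design choice that matters is that the guardrail lives on the ERP side of the boundary, encoding what the business system can tolerate, not what the model happens to emit.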
Prompt and workflow design is another category that is growing. This is the work of redesigning business processes around AI capability. A procurement workflow that previously required a human to review vendor invoices, cross-reference them against purchase orders, flag anomalies, and escalate exceptions can now be redesigned so an AI model handles the first three steps and a human reviews only the flagged exceptions. Designing that new workflow well requires understanding the model's actual capabilities and failure modes, the business logic that governs the process, and what the cost of an error is in that specific context. Getting it wrong is expensive. Getting it right compounds over time as the process runs thousands of times per month.
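The procurement example above can be sketched as an exception-routing function. This is an illustration under stated assumptions, not a production design: `score_fn` stands in for the AI step, the 0.8 threshold is an arbitrary placeholder a real team would calibrate against its own error costs, and the invoice and purchase-order fields are invented.

```python
def triage_invoice(invoice: dict, po: dict, score_fn) -> str:
    """Route an invoice: auto-approve clear matches, escalate everything else.

    score_fn is a stand-in for the model; it returns an anomaly
    score in [0, 1]. The threshold below is an assumption.
    """
    # Deterministic cross-reference against the purchase order first:
    # hard rules stay hard rules, the model never overrides them.
    if invoice["vendor"] != po["vendor"] or invoice["amount"] > po["amount"]:
        return "escalate"
    # The model flags soft anomalies the rules cannot express.
    if score_fn(invoice, po) >= 0.8:
        return "escalate"  # a human reviews only these exceptions
    return "auto-approve"
```

Note the ordering: the deterministic checks run before the model, so the human workload shrinks to genuine exceptions while the business logic that must never be violated stays outside the model entirely.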
The Governance Layer Is Arriving
LinkedIn lists over 53,000 AI-governance-related job openings worldwide right now. India shows over 8,000 vacancies specifically. These numbers are growing because regulation is arriving in multiple jurisdictions simultaneously, and organizations are not prepared for it.
AI governance is the set of policies, processes, and oversight structures that determine how AI systems are built, deployed, audited, and corrected inside an organization. It covers questions like: who is accountable when the model makes a consequential error, how do you document model behavior to satisfy a regulator's audit, how do you monitor a model in production so you catch drift before it causes real damage, and how do you build the internal processes that keep AI usage inside legal boundaries as those boundaries change.
The EU AI Act is the most significant regulatory document in this space right now. It is 458 pages long. It classifies AI systems by risk level, where high-risk systems used in healthcare, credit scoring, hiring, and critical infrastructure face strict requirements around documentation, human oversight, and incident reporting. Organizations operating in Europe need people who understand how to map their AI systems to those risk classifications, build the documentation pipelines that regulators require, and respond correctly when an incident triggers a reporting obligation. That work requires both technical understanding and regulatory literacy simultaneously.
The NIST AI Risk Management Framework is the US equivalent. It provides a structured approach for organizations to identify, assess, and manage AI risk across the model lifecycle. Model lifecycle means the full arc of an AI system's existence: data collection, training, validation, deployment, monitoring, and eventual retirement. Understanding how risk accumulates and manifests at each stage is the foundation of governance work.
ISO/IEC 42001 is the international standard for AI management systems. Organizations seeking certification under this standard need to build internal governance structures that satisfy its requirements around transparency, accountability, and continuous improvement. Certification is increasingly becoming a requirement in enterprise procurement conversations, which means the demand for people who can build toward it is real and growing.
The Skills That Actually Matter
The World Economic Forum specifically identifies technical translation as the scarcest skill in AI governance: the ability to read a technical AI research paper and convert its findings into policy language that a regulator or board member can act on. Most engineers can write the technical analysis. Most lawyers and policy professionals can write the regulatory brief. The people who can do both, who understand what a model's attention mechanism actually does and can explain why that creates a specific regulatory risk in plain language, are genuinely rare.
On the technical side, you need working knowledge of how models are trained, validated, and monitored. Bias detection and fairness auditing are specific competencies here. Tools like LIME and SHAP are used to interpret model outputs, to understand why a model made a specific prediction, and to evaluate whether the model's behavior is consistent across different demographic groups. LIME stands for Local Interpretable Model-agnostic Explanations and SHAP stands for Shapley Additive Explanations. Both are techniques for making model decisions explainable to humans, which is a regulatory requirement in high-risk AI contexts.
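To show what Shapley-based attribution actually computes, here is the exact calculation hand-rolled on a toy additive credit model, without the SHAP library itself. The feature names and weights are invented for illustration; real libraries approximate this same quantity for models where enumerating every feature subset is intractable.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's marginal contribution to the
    model's output, averaged over all subsets with the classic weighting.
    Tractable only for a handful of features; SHAP approximates this."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                phi[f] += weight * (with_f - without_f)
    return phi

# Hypothetical scoring model: each present feature adds a fixed amount.
def toy_score(present):
    weights = {"income": 40.0, "history": 25.0, "age": 5.0}
    return sum(weights[f] for f in present)

attributions = shapley_values(["income", "history", "age"], toy_score)
```

Because the toy model is purely additive, each feature's Shapley value equals its weight exactly; the method earns its keep on real models, where contributions depend on which other features are present, which is also where fairness auditors compare attributions across demographic groups.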
Data governance is foundational. This means understanding data lineage, which is the record of where data came from and how it was transformed before it reached the model, and data provenance, which is the documentation of a dataset's origin and the process by which it was collected. Regulators increasingly require this documentation. Tools like Collibra and Apache Atlas are used to build and maintain these records at scale.
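At its smallest, a lineage record is just structured metadata about one transformation step. The sketch below is a hypothetical, deliberately minimal version of what platforms like Apache Atlas store as full graphs; the field names and the checksum scheme are my assumptions, included to show why a regulator can trust such a record.

```python
import hashlib
import json

def lineage_record(step, inputs, rows_out, params):
    """One minimal lineage entry: which transformation ran, on which
    upstream datasets, with which parameters, producing how many rows."""
    core = {
        "step": step,               # name of the transformation
        "inputs": sorted(inputs),   # upstream dataset identifiers
        "params": params,           # transformation parameters
        "rows_out": rows_out,       # output size, a cheap sanity signal
    }
    # A content hash over the canonical form lets an auditor detect
    # after-the-fact edits to the record.
    checksum = hashlib.sha256(
        json.dumps(core, sort_keys=True).encode()
    ).hexdigest()
    return {**core, "checksum": checksum}
```

Chained over every step from raw collection to the training set, records like this are the lineage documentation regulators increasingly ask for: each entry names its upstream inputs, so the full path of any datum can be walked backward.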
Model monitoring is the ongoing work of tracking how a deployed model's performance changes over time. Models experience drift, meaning their accuracy and reliability degrade as the real-world data they encounter diverges from the data they were trained on. Catching drift early, before it causes consequential errors at scale, requires monitoring infrastructure and clear escalation processes. MLflow is one tool used to track model performance metrics in production.
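One common drift signal is the Population Stability Index, which compares a feature's distribution at training time against what the deployed model is seeing now. This is a self-contained sketch of the statistic itself; the thresholds in the docstring are an industry rule of thumb, not a standard, and a real pipeline would log this metric per feature into a tracker like MLflow and alert on it.

```python
from math import log

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a
    production sample of one numeric feature. Common rule of thumb:
    below 0.1 stable, above 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    # Bin edges from the training-time range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1  # index = edges exceeded
        # Small floor so empty bins don't blow up the log ratio.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((qi - pi) * log(qi / pi) for pi, qi in zip(p, q))
```

The escalation process matters as much as the metric: a PSI breach on its own says the inputs moved, not that the model is wrong, so the alert should trigger a human investigation rather than an automatic rollback.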
India's Specific Opportunity
India's own regulatory framework, the Digital Personal Data Protection Act commonly referred to as the DPDPA, is still being operationalized. The rules governing how it applies to AI systems are still forming. That is a real opportunity for professionals who build expertise in it now.
Multinationals operating in India face a compliance challenge that is specific to them: they need to satisfy both Indian data protection requirements and global frameworks like the EU AI Act simultaneously, because they have operations and customers across jurisdictions. The requirements overlap in places and conflict in others. The people who understand how to navigate that dual compliance requirement are scarce and the organizations that need them are willing to pay for the expertise.
India currently has close to 300 open roles specifically for Head of AI Governance. That number reflects where the institutional demand is consolidating: organizations want senior people who can own this function, build internal policy, and represent the company in regulatory conversations. Building toward that profile now, while the function is still forming and the competition for those roles is lower than it will be in three years, is a compounding advantage.
What I Think Is Actually Happening
The gap between AI research and business operations is compressing fast. Anthropic building a services company is the clearest signal yet that the people who own the deployment layer will capture the most value from this transition. That changes the career logic for everyone in tech.
The roles that will matter are the ones that live in the space between the model and the business: people who understand how the model behaves, understand the system it connects to, understand the regulation it has to comply with, and can coordinate across the engineering team, the legal team, and the executive team to make good decisions about all three simultaneously.
That profile takes time to build. It requires genuine technical depth, regulatory literacy, and the communication skills to hold the room when the conversation gets complicated. Most people build one of those three well. The few who build all three will have more leverage in this market than almost any other technical profile I can think of.
The ground shifted on May 4, 2026. The question is what you build on it from here.