The State of Artificial Intelligence: Comprehensive Analysis of News and Technology Developments on March 14, 2026
Introduction: The Convergent Evolution of Agentic Systems and Global Compute Infrastructure
The landscape of artificial intelligence as of March 14, 2026, represents a definitive pivot from generative assistance to autonomous agency. The preceding 24 hours have seen a wave of developments that underscores a broader transition within the global technology sector: the movement away from models that merely answer questions toward systems that execute complex, multi-step workflows across disparate software environments. This transition, however, is occurring against a backdrop of significant infrastructure instability and a deepening debate over the capital sustainability of the current "AI bubble."
Market analysts surveying the March 2026 news cycle are grappling with a paradox. While model capabilities continue to defy previous benchmarks, the physical and financial foundations required to sustain them are showing signs of stress. The collapse of major datacentre deals, such as the friction between OpenAI and Oracle over the Abilene, Texas facility, suggests that the "capex arms race" may be reaching a point of diminishing returns for traditional infrastructure providers. Concurrently, the release of high-fidelity, agent-centric models like GPT-5.4 and the Gemini 3.1 series has fundamentally altered the expectations of enterprise adopters, who now demand "digital coworkers" rather than simple chatbots.
The technology developments of the past 24 hours (March 12–13, 2026) highlight a strategic bifurcation between the major labs. While OpenAI has leaned further into defense and government-aligned deployments, triggering significant consumer backlash, Anthropic has solidified its position as the "ethical alternative," launching the Claude Partner Network with a $100 million commitment to enterprise transparency and safety. This divergence is not merely ideological; it is shaping the procurement strategies of global agencies and the development of national AI sovereignty.
| Metric | Status as of March 14, 2026 | Yearly Trend |
| --- | --- | --- |
| Global Datacentre Lease Volume | > $700 Billion | +340% increase |
| Enterprise Agent Adoption Rate | 68.1% of organizations | Significant growth in SQL/DevOps |
| AI-Native Startup Revenue | Scaling 5.5x (e.g., Claude Code) | Rapid acceleration |
| Zero-Click Search Frequency | ~60% of all queries | Disrupting traditional SEO |
| GPU Performance per Watt | 5x improvement (NVIDIA Rubin platform) | Generational leap |
As we dissect the news of the past 24 hours, it becomes clear that the focus has shifted from "can AI do this?" to "how much compute will it cost?" and "can we trust it to act independently?" The following analysis provides an exhaustive review of these themes, synthesizing the technical, economic, and geopolitical factors defining the current epoch of artificial intelligence.
Architectural Breakthroughs: GPT-5.4 and the Transition to Native Agency
The release of GPT-5.4 on March 5, 2026, marked a significant milestone in the evolution of large language models (LLMs). The model, which succeeded the incremental GPT-5.2 and 5.3 versions, was specifically designed to bridge the gap between text generation and software interaction. As of March 13, 2026, GPT-5.4 is being evaluated in production environments, where its "Thinking" and "Pro" modes are redefining professional workflows.
The "Thinking" mode utilizes a new deliberate reasoning architecture that allows the model to allocate additional compute time to internal deliberation before outputting a response. This mechanism is critical for complex tasks such as financial modeling, where logical errors can be catastrophic. Unlike the "Instant" models of the past, GPT-5.4 Thinking evaluates potential outcomes and multi-step dependencies, making its responses 33% less likely to contain false claims than those of previous iterations.
One of the most transformative features of GPT-5.4 is its native computer-use capability. While previous models required complex API integrations to interact with software, GPT-5.4 can interpret screenshots in real-time and simulate mouse and keyboard inputs to navigate standard desktop environments. This allows the model to perform tasks such as updating CRM records from email threads, generating accounting spreadsheets from scanned invoices, and even debugging code across multiple files within an IDE.
| Feature | GPT-5.4 Thinking | GPT-5.4 Pro |
| --- | --- | --- |
| Primary Use Case | Logic, Planning, Error Reduction | High-Intensity Research, Data Synthesis |
| Context Window | 1 Million Tokens | 1 Million Tokens |
| Reasoning Depth | Sequential deliberative steps | Parallel knowledge retrieval |
| Cost Efficiency | Higher (lower token waste) | Lower (premium research pricing) |
| Latency | High (seconds/minutes) | Low (real-time stream) |
The context window expansion to 1 million tokens represents a qualitative shift in how AI processes information. Large organizations are now using GPT-5.4 to analyze entire codebases or 500-page legal contracts in a single session, eliminating the need for complex Retrieval-Augmented Generation (RAG) systems that often lose nuance when chunking data. In specialized benchmarks, GPT-5.4 scored 91% on the BigLaw Bench, demonstrating its ability to maintain high precision in complex legal document analysis.
The "Pro" version of GPT-5.4, meanwhile, is aimed at the most intensive knowledge work. Designed for scientists, developers, and consultants, the Pro model integrates with advanced tool servers, reducing token usage for tool searches by up to 47% while maintaining high accuracy. This indicates that OpenAI is prioritizing efficiency alongside raw power, as the cost of running such massive models becomes a primary concern for enterprise clients.
The implications for the developer community are profound. As AI starts understanding code in context rather than snippets, the role of the software engineer is shifting toward system orchestration. GPT-5.4's improved performance in producing polished frontend interfaces and iterating through debugging cycles has led to reports that AI-native startups are operating with 5.5x the efficiency of traditional software teams.
The Global Compute Arms Race: Infrastructure, Investment, and the Bubble Concerns
Even as these software breakthroughs accumulate, the physical infrastructure supporting them is under intense scrutiny. The "Stargate" project, a $500 billion infrastructure gamble aimed at securing American leadership in AI, is currently facing significant headwinds. Negotiations between OpenAI and Oracle regarding the expansion of a massive datacentre in Abilene, Texas, have reportedly stalled due to disagreements over financing and the speed of capacity delivery.
This friction is symptomatic of a broader "AI bubble" concern. Since the launch of ChatGPT three years ago, the global datacentre lease market has ballooned to over $700 billion, a 340% increase in just two years. However, the economic productivity gains promised by these investments have yet to fully manifest in national GDP figures. For instance, the UK reported zero GDP growth for January 2026, even as the government aggressively marketed Britain as an AI hub.
Financial analysts warn that the capital side of the AI economy is showing fissures. Many datacentre operators, such as Nscale, have secured billions in loans leveraged against their GPU inventories. Because graphics chips depreciate rapidly and hardware generations move at a "head-spinning velocity," lenders are taking on massive risk. If a newer chip architecture, like the upcoming NVIDIA Rubin, renders current Blackwell systems obsolete faster than anticipated, the underlying collateral for these loans could evaporate.
| Infrastructure Risk Factor | Description | Impact Level |
| --- | --- | --- |
| Capital Concentration | Top 3 hyperscalers control > $500B in orders | High (Systemic Risk) |
| Hardware Obsolescence | Rubin vs. Blackwell transition cycle | Medium (Asset Depreciation) |
| Supply Chain Fragility | Helium supply risks due to Iran/Qatar crisis | Low (Logistical Delay) |
| Environmental Regulation | Rising emissions costs in UK/Europe | Medium (Approval Delays) |
| Sovereign AI Demands | Pressure to buy domestic vs. US hardware | Medium (Trade Tension) |
The UK's specific exposure to this bubble is noteworthy. Investigations have revealed that many "sovereign AI datacentre" projects, such as the one in Loughton, Essex, are significantly delayed or remain "scaffolding yards" months after their announced launch dates. The tension between the need for massive computing power and the rising environmental costs of datacentres has led some councils to approve projects despite emission warnings, while others have stalled them indefinitely.
Furthermore, geopolitical events are directly impacting the AI supply chain. Iranian drone strikes have recently affected helium supplies from Qatar, a critical component in advanced semiconductor manufacturing. As a result, the "Stargate" and other major infrastructure projects are not just financial gambles; they are subject to the same physical and political constraints as traditional industrial sectors.
Despite these cracks, the "Capex Arms Race" continues among the tech giants. Amazon recently shocked the market by announcing a planned $200 billion in capital expenditures for 2026, followed by Alphabet and Microsoft at $180 billion and $155 billion respectively. This massive influx of capital has created a backlog of orders for NVIDIA, but it has also invited intense scrutiny from regulators regarding alleged "loyalty penalties" used to deter customers from exploring rival hardware.
Competitive Paradigms: The Strategic Bifurcation of Anthropic and OpenAI
In the week leading up to March 14, 2026, a fundamental shift occurred in the competitive dynamics of the "Big Three" AI labs: OpenAI, Anthropic, and Google. The dominant story of March 13 was the fallout from OpenAI’s deepening ties with the U.S. Department of War (formerly the Department of Defense). Following the announcement of a deal for classified military deployments in late February, OpenAI faced a massive "consumer backlash" termed the #QuitGPT movement.
Data from app intelligence platforms revealed that ChatGPT uninstalls surged by 295% day-over-day on February 28, while one-star reviews spiked by 775%. This created a vacuum that Anthropic was perfectly positioned to fill. Claude, Anthropic’s flagship model, saw its downloads jump by 51% over the same period, reaching #1 on the U.S. App Store for the first time. This shift underscores the growing importance of "vendor ethics" in the AI market, as users and enterprises alike evaluate the risks associated with military-aligned AI providers.
Anthropic has doubled down on its "corporate governance" brand. On March 12, 2026, the company launched the Claude Partner Network, committing $100 million to support enterprises in adopting Claude with transparency and technical support. Unlike OpenAI, Anthropic has refused to lift "guardrails" that prevent its models from being used for mass domestic surveillance or fully autonomous weapons targeting, leading to its being phased out of several federal agencies in favor of OpenAI and Google.
| Company | Market Strategy | Key Move (March 2026) | User Sentiment |
| --- | --- | --- | --- |
| OpenAI | Government/Defense Focus | $25B revenue; DoD/DoW deal | Polarized (#QuitGPT) |
| Anthropic | Ethical/Enterprise Focus | $100M Partner Network; #1 App Store | Positive (Trust-led) |
| Google | Deep Reasoning/Ecosystem | Gemini 3.1 Pro "Deep Think" | Rebounding |
| Meta | Lagging/Open Source Shift | "Avocado" delay; Gemini licensing | Skeptical |
Google, meanwhile, has staged a powerful comeback with the Gemini 3.1 series. Released in mid-February and expanded in early March, Gemini 3.1 Pro utilizes a "Deep Think" architecture that emphasizes structured reasoning and deliberate problem-solving. On logic-heavy benchmarks like ARC-AGI-2, Gemini 3.1 Pro achieved a verified score of 77.1%, more than double the performance of the previous Gemini 3 Pro version. Google is effectively positioning Gemini as the "deeper" alternative, focusing on scientific methods and complex agentic tool use.
Meta remains the outlier in this high-stakes competition. Internal personnel upheaval and the repeated delay of the "Avocado" AI model have put Mark Zuckerberg’s AI division in a precarious position. Reports suggest that Meta is even considering licensing Google's Gemini to support its own products—a move that would signify a major strategic defeat for its independent model ambitions. This highlights the "Crocodile Mouth" effect in the industry: as the leading models (GPT-5.4, Gemini 3.1, Claude 4.6) pull ahead, the cost of staying relevant is bankrupting or marginalizing the second-tier players.
Semiconductor Horizons: NVIDIA Rubin and the 1.6nm Future
The hardware foundation for these software advancements is undergoing its most significant shift since the introduction of the Transformer architecture. As of March 14, 2026, the industry is preparing for the transition from NVIDIA’s Blackwell architecture to the newly unveiled Rubin platform. Named after Vera Rubin, the platform represents a generational leap in computing power, driven by extreme codesign across six new chips.
The Rubin architecture is expected to be built on Taiwan Semiconductor Manufacturing Company’s (TSMC) advanced 1.6nm (A16) process, incorporating revolutionary silicon photonics to eliminate data bottlenecks. The platform features the Vera CPU—a custom ARM-based processor with 88 "Olympus" cores designed specifically for agentic reasoning—and the Rubin GPU, which includes eight stacks of HBM4 memory for a total of 384 GB.
| Platform Component | Technical Advancement | Performance Gain over Blackwell |
| --- | --- | --- |
| Rubin GPU | HBM4 (384 GB) | 10x lower inference cost for MoE |
| Vera CPU | 88 custom Olympus cores | Native agentic reasoning support |
| NVLink 6 Switch | 3.6 TB/s bandwidth per GPU | Generational bandwidth leap |
| BlueField-4 DPU | Agentic telemetry architecture | Faster data enrichment |
| HGX Rubin System | Liquid-cooled rack-scale | 4x fewer GPUs required for training |
According to NVIDIA CEO Jensen Huang, the Rubin platform delivers up to 10 times lower cost per token for Mixture-of-Experts (MoE) model inference compared to the Blackwell platform. This is a critical development for the sustainability of the AI economy, as the astronomical cost of inference has been a primary concern for startups and enterprises. Furthermore, MoE models can now be trained with 4 times fewer GPUs, potentially slowing the frantic construction of energy-hungry supercomputers.
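Taken at face value, these multipliers reduce to simple arithmetic. The sketch below plugs the claimed 10x inference-cost reduction and 4x training-fleet reduction into hypothetical baseline numbers; the per-token price and fleet size are invented for illustration, not quoted figures.

```python
# Back-of-the-envelope sketch of the claimed Rubin-vs-Blackwell multipliers
# (10x lower cost per token for MoE inference, 4x fewer GPUs for training).
# All baseline inputs are illustrative placeholders, not real prices.

def inference_cost(tokens: float, cost_per_million: float) -> float:
    """Total inference spend, in dollars, for a given token volume."""
    return tokens / 1_000_000 * cost_per_million

# Hypothetical Blackwell-era baseline: $2.00 per million tokens served.
blackwell_rate = 2.00
rubin_rate = blackwell_rate / 10  # the claimed 10x reduction

monthly_tokens = 50_000_000_000  # 50B tokens/month, an arbitrary workload
saving = (inference_cost(monthly_tokens, blackwell_rate)
          - inference_cost(monthly_tokens, rubin_rate))
print(f"Monthly saving: ${saving:,.0f}")  # $90,000 under these assumptions

# Training-fleet sizing: a job that needed 32,768 Blackwell GPUs would,
# by the 4x figure, need roughly 8,192 Rubin GPUs.
blackwell_fleet = 32_768
rubin_fleet = blackwell_fleet // 4
print(rubin_fleet)  # 8192
```

At these made-up rates, the multipliers compound quickly, which is why per-token economics dominate the sustainability debate described above.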
The first Rubin systems are slated for release in the second half of 2026 via AWS, Microsoft, and Google. Microsoft is already integrating these rack-scale systems into its next-generation "Fairwater" AI superfactories. This hardware surge is not limited to data centers; "Edge-centric AI" is also rising, with specialized ASICs and accelerators allowing lightweight models to deliver real-time insights closer to where data is generated.
However, the hardware market is not without its controversies. The U.S. Department of Justice (DOJ) has recently escalated its investigation into NVIDIA’s market dominance, issuing subpoenas regarding alleged "loyalty penalties" used to deter customers from using rival hardware from startups or companies like Groq and Cerebras. As AI chips become the "new oil," the battle over semiconductor sovereignty is intensifying, with countries being warned that "AI chips will remain in America" unless foreign governments invest significantly in US-based manufacturing.
The Benchmark Revolution: Assessing Intelligence through Humanity's Last Exam
For years, the artificial intelligence industry relied on benchmarks like MMLU to measure progress. However, as recent results confirm, these tests are no longer sufficient. Models like GPT-4 and Gemini 1.5 began "acing" these tests through pattern matching rather than genuine understanding, creating a "saturation" effect that blinded researchers to the real limits of machine intelligence.
In response, an international team of nearly 1,000 experts from diverse academic backgrounds—including historians, physicists, and medical researchers—created "Humanity's Last Exam" (HLE). Launched in early 2026, HLE consists of 2,500 expert-level questions designed to be impossible to solve via internet search or simple retrieval. The questions range from identifying microscopic anatomical structures in birds to translating ancient Palmyrene inscriptions and analyzing phonological details in Biblical Hebrew.
The results of HLE have been "brutal" for even the most advanced systems. In initial trials, GPT-4o scored a meager 2.7% accuracy, while Claude 3.5 Sonnet achieved 4.1%. Even OpenAI's reasoning-focused "o1" model only reached 8%. The most capable systems currently available, Gemini 3.1 Pro and Claude Opus 4.6, have pushed accuracy into the 40-50% range, but a vast gap remains between machine output and true human expertise.
| AI Model | HLE Score (%) | Key Failing |
| --- | --- | --- |
| GPT-4o | 2.7% | Shallow reasoning and retrieval reliance |
| Claude 3.5 Sonnet | 4.1% | Lack of specialized domain knowledge |
| OpenAI o1 | 8.0% | Reasoning steps insufficient for expert tasks |
| Gemini 3.1 Pro | 42.0% | Best currently, but struggles with synthesis |
| Claude Opus 4.6 | 45.0% | Strongest in humanities/specialized math |
These findings suggest that while AI is excellent at scanning databases and spotting patterns, it fails when ambiguity or implicit reasoning enters the picture. Human specialists draw on layers of context, intuition, and interconnected concepts built up through years of lived experience—elements that are not yet replicable in current transformer architectures. As one researcher from Texas A&M University noted, HLE serves as a reminder that "intelligence isn't just about pattern recognition—it's about depth, context, and specialized expertise".
This benchmark is particularly critical for policymakers and medical professionals who need a realistic understanding of what AI can and cannot do. Without accurate assessment tools, organizations risk misinterpreting machine output as genuine intelligence, leading to "serious ethical risks" in high-stakes environments like healthcare.
Institutional and Geopolitical AI: Defense, Policy, and National Sovereignty
The intersection of AI and national security reached a boiling point on March 13, 2026, with the ongoing controversy surrounding the use of AI in military systems. The "Anthropic-Pentagon battle" has become a case study in the struggle over AI ethics. While companies like OpenAI and xAI (Grok) have actively positioned themselves to secure massive defense contracts, Anthropic’s refusal to lift its constitutional guardrails for "fully autonomous weapons" has led to its being sidelined by federal agencies.
This strategic divergence has global implications. In Africa, experts are warning about "invasive AI-led mass surveillance" violating freedoms, often powered by technologies exported from the global north. Simultaneously, the rise of AI-generated images of conflict in Iran has made it increasingly difficult to discern reality from propaganda, prompting calls for tighter regulation of generative media during geopolitical crises.
National sovereignty in the AI era is becoming a matter of compute ownership. The UK government’s push for the NHS and Ministry of Defence to "buy British tech" is part of a broader trend in which nations are attempting to build "Sovereign AI" to reduce dependence on US-based hyperscalers. However, as the Guardian’s investigation into delayed UK datacentres shows, the gap between political rhetoric and infrastructural reality remains wide.
| Policy Area | Key Development (March 2026) | Regional Impact |
| --- | --- | --- |
| Defense Contracts | OpenAI/DoW deal; #QuitGPT movement | United States |
| Mass Surveillance | AI monitoring in African nations | Sub-Saharan Africa |
| Sovereignty | UK "Buy British" mandate for NHS/MoD | United Kingdom |
| Regulation | Calls for tighter rules on AI toys for children | International |
| Copyright | Pro-author changes to copyright laws | United Kingdom |
In the regulatory sphere, researchers are now focusing on the vulnerability of young children to "AI toys," arguing that these devices must be more tightly regulated to prevent psychological manipulation or data privacy breaches. Furthermore, the UK is considering changes to copyright laws to protect authors against unauthorized training by Big Tech models, reflecting a growing push to rebalance the relationship between content creators and AI companies.
Ethics in medicine also remains a primary concern. In the Akron area, experts have voiced fears that AI algorithms are being integrated into hospitals too quickly and without proper vetting. Potential biases in training data could lead to harmful outcomes for minority populations, and the lack of a clear "risk-benefit profile" for AI-led triage has led ethicists to call for deliberate, methodical implementation policies.
Macroeconomic Shifts: Labor Markets and the Productivity Gap
The macroeconomic impact of AI is becoming more tangible as we move through March 2026. A survey of 1,000 U.S. full-time workers released by Novorésumé on March 13 shows that AI has "infiltrated virtually every workplace". While 47% of users say the technology helps them complete tasks faster, the study revealed a "productivity paradox": many workers are spending the saved time on personal activities while still on the clock, rather than increasing their professional output.
| Generational AI Reliance | Usage Rate (%) | Key Behavior |
| --- | --- | --- |
| Millennials | 55% | Use AI to finish work faster (Highest) |
| Gen Z | 49% | High adoption, low guilt |
| Gen X | 40% | Moderate adoption |
| Baby Boomers | 36% | Lowest adoption, most skeptical |
Despite this, the trend of corporate layoffs driven by AI-led restructuring is accelerating. On March 11, 2026, Atlassian announced it was laying off 10% of its workforce (1,600 workers) as it pivots toward AI and enterprise sales. Several other companies have explained that new AI tools allow "smaller teams to perform work that previously required larger departments," essentially turning AI into an "efficiency multiplier" for investors.
In Australia, the debate centers on whether AI is taking jobs or simply being used as an excuse for corporate restructuring. While 58% of workers feel their jobs are safe from AI replacement, economists warn that AI efficiency may gradually become the "new corporate baseline," leading to adjusted pay scales or reduced hours for roles that are increasingly automated.
The cost of intelligence is also dropping. Cheaper models like Gemini 3.1 Flash-Lite (priced at $0.25 per million input tokens) are allowing freelancers and independent creators to automate research, marketing, and operations. This has led to the rise of "AI-native startups"—companies built entirely around AI systems from day one. These startups often operate with very small teams but generate significant revenue by leveraging agentic tools to perform tasks that once required dedicated staff.
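To see why sub-dollar pricing matters for small operators, the sketch below costs out a hypothetical freelancer workload at the quoted $0.25-per-million input-token rate. The output-token price and the task volumes are assumptions for illustration, not published figures.

```python
# Rough monthly-budget sketch for an "AI-native" freelancer workflow at the
# quoted Gemini 3.1 Flash-Lite input price ($0.25 per million tokens).
# The output price and task volumes below are hypothetical.

INPUT_PRICE_PER_M = 0.25    # quoted input-token price
OUTPUT_PRICE_PER_M = 1.00   # assumed output-token price, illustration only

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one model call, in dollars."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# A hypothetical month: 600 research summaries (8k tokens in / 1k out each)
# and 200 marketing drafts (2k in / 1.5k out each).
monthly = 600 * task_cost(8_000, 1_000) + 200 * task_cost(2_000, 1_500)
print(f"${monthly:.2f}")  # $2.20 under these assumptions
```

Even allowing generous volumes, a month of automated research and drafting lands in the low single dollars at these rates, which is the economic basis for the lean-team model described above.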
Industry-Specific Transformations: Healthcare, Supply Chains, and Retail
Beyond the tech sector, AI is reshaping traditional industries with varying degrees of success. At the fourth China International Supply Chain Expo (CISCE), the introduction of an "Artificial Intelligence Zone" highlighted the modernization of Advanced Manufacturing and Smart Vehicles through AI-enabled robotics. New players in the "low-altitude aviation" (drone) sector are using AI for autonomous navigation, reflecting a broader trend of intelligence moving into the physical world.
In retail and consumer devices, Samsung is reportedly seeking "AI allies" to compete with Apple’s ecosystem. Samsung has already embedded Google's Gemini models into its devices and is exploring further partnerships with OpenAI and Perplexity AI to differentiate its Galaxy devices through superior AI features, such as voice assistants capable of reserving services without user interaction.
| Industry | AI Development (March 2026) | Key Innovation |
| --- | --- | --- |
| Healthcare | AI Analyst for analytics; Brain MRI flagging | Seconds to interpret MRIs |
| Logistics | AI Zone at CISCE; Drone aviation | AI-enabled robotics at scale |
| Finance | Automated back-office workflows | AI digital workers for finance/HR |
| Retail | Samsung/Perplexity alliance | Voice assistants for service booking |
| Energy | Real-time optimization of production | Smart manufacturing "AI Factory" |
The energy sector is also seeing the rise of "AI-Ready Data Infrastructure". Companies like Prolifics are building foundations for enterprise-scale AI, focusing on real-time optimization of production and quality in manufacturing. However, the environmental cost of the datacentres required to power these systems remains a major point of contention, with some researchers asking if it is "time to quit AI" due to its carbon footprint.
In the scientific community, AI is becoming part of the "scientific method itself". Researchers at NBI have built real-time monitoring systems that track qubit fluctuations in quantum computers, while others have found that brain-inspired "neuromorphic" computers are better at solving physics equations than traditional supercomputers. These breakthroughs suggest that the next leap in computing will come from the fusion of AI and quantum systems, potentially solving problems that once took years in just a few days.
The Evolution of Digital Discovery: Discovery Engine Optimization in 2026
The way people find information has changed fundamentally in 2026. Traditional search engine optimization (SEO) is being replaced by "Discovery Engine Optimization" (DEO). With zero-click searches accounting for nearly 60% of all Google queries, the goal is no longer just to rank, but to be "the answer" synthesized and cited by the AI.
AI platforms like Perplexity (with its Comet browser) and Google’s Gemini prioritize "answer-ready" content that is structured for extraction. Long, conversational queries (averaging 23 words on AI platforms compared to 3-4 words on traditional Google) have made keyword targeting secondary to "contextual intent".
| DEO Strategy | Description | Actionable Step |
| --- | --- | --- |
| Contextual Extraction | Structure content for AI scrapers/bots | Use H2/H3 for direct questions |
| Topical Authority | Dominate subtopics, not just keywords | Create "Question-Chain" content |
| Citations & Mentions | Increase brand presence on Reddit/LinkedIn | Encourage authentic user reviews |
| Entity Definition | Clarify brand identity for LLMs | Implement granular Schema markup |
| Multi-Platform Presence | Optimize for YouTube, TikTok, and Reddit | Diversify content across 3-4 platforms |
Marketers in 2026 must optimize for "citation frequency" within AI-generated summaries. Since AI cites Reddit often (up to 30% of the time) because it craves authentic human perspective, brand mentions in community discussions have become as valuable as high-quality backlinks once were. Furthermore, "AI Overviews" now appear for commercial and transactional queries, surfacing brands that may not even appear in the top organic results if their content is more "cite-able".
Technical SEO now includes a deeper focus on "agent-optimized queries" and ensuring that business information is machine-readable so AI agents can check real-time availability and book appointments on behalf of users. As one expert noted, "The days of 'just rank on Google' are over; SEO in 2026 rewards brands that understand why users search, not just what they search".
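One established way to make business information machine-readable is Schema.org structured data. The sketch below builds a minimal LocalBusiness record in Python and serializes it to JSON-LD; the business details and booking URL are invented placeholders, and whether any particular AI agent consumes a given property is an assumption about agent behavior, not a guarantee.

```python
import json

# Minimal Schema.org LocalBusiness markup, built as a Python dict and
# serialized to JSON-LD. All business details are invented placeholders;
# a real page would embed the output in a
# <script type="application/ld+json"> tag in the HTML head.
business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Dental Clinic",
    "telephone": "+1-555-0100",
    "openingHours": "Mo-Fr 09:00-17:00",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressCountry": "US",
    },
    # Exposing a booking endpoint gives an agent a concrete action to take.
    "potentialAction": {
        "@type": "ReserveAction",
        "target": "https://example.com/book",
    },
}

json_ld = json.dumps(business, indent=2)
print(json_ld)
```

Hours, phone numbers, and a reservation target are exactly the fields an agent would need to "check real-time availability and book appointments," which is why granular markup of this kind is treated as table stakes for DEO.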
Strategic Recommendations and Future Outlook
The AI ecosystem as of March 14, 2026, is defined by a paradoxical combination of unprecedented cognitive capability and significant infrastructural fragility. The emergence of agentic systems like GPT-5.4 has moved the industry from the "chatbot era" to the "coworker era," but the sustainability of this model depends on a massive, high-risk build-out of physical compute power.
For enterprise leaders, the path forward involves shifting investment from general-purpose AI toward domain-specific, secure agentic workflows. As shown by "Humanity's Last Exam," the most successful applications of AI will be those that augment human expertise in context-rich environments rather than attempting to replace it entirely. In the marketing sphere, the transition to Discovery Engine Optimization is essential; brands that fail to become "cite-able sources" for AI intermediaries will likely disappear from the consumer discovery funnel.
As the industry looks toward the Rubin era and the potential integration of quantum computing, the focus must remain on sovereignty, security, and sustainability. Those who can balance the raw power of these systems with robust ethical frameworks and reliable physical infrastructure will define the next decade of digital leadership.