ContentSproute


Reddit should be a ‘go-to search engine,’ Steve Huffman says

Reddit is going to be leaning even harder into search in the coming months. The company has already been working on a plan to integrate its LLM-powered search into its main search feature, but CEO Steve Huffman said he wants users to think of the site as an actual search engine. During the company’s latest earnings call, Huffman said search is one of Reddit’s top priorities. “We’re concentrating our resources on the areas that will drive results for our most pressing needs, improving the core product, making Reddit a go-to search engine, and expanding internationally.”

The idea of Reddit as a search engine isn’t that far-fetched. Many people are already in the habit of adding “Reddit” to traditional searches in the hopes of finding relevant threads from the site, and the company has been trying to take advantage of this with its own AI-powered search product, Reddit Answers. Though that feature is still labeled as being in “beta,” the company plans to eventually add it to its default search bar. “Our focus right now is on unifying the Reddit search, like traditional search on Reddit, which is very widely used on Reddit, and the new Reddit answers product … we’re unifying those into a single search experience, and we’re going to bring that front and center in the app,” Huffman said.

Huffman’s comments come at a time when AI is increasingly eating into websites’ search traffic. Even Reddit, which has a multimillion-dollar data licensing deal with Google, isn’t immune to those trends. During the call, Huffman said that Reddit’s search traffic from Google “varies week to week,” but that overall “it was a headwind” during the last quarter. That may help explain why Huffman is so eager to make Reddit itself a search destination, even as the company continues to license its data to AI companies. “AI doesn’t invent knowledge,” he said. “It learns from us; from real people, sharing real perspectives.”


Apple is ‘open to’ acquisitions to boost its AI roadmap

Apple leadership discussed results and updates today in its third-quarter conference call, including some statements about its AI endeavors. As reported by CNBC, CEO Tim Cook said that the company is “significantly growing our investments” in artificial intelligence, which shouldn’t be much of a surprise to anyone in the tech space. However, Cook did acknowledge that an acquisition to boost its work in AI wasn’t out of the question. “We’re open to M&A that accelerates our roadmap,” he said. Cook said that Apple is “not stuck on a certain size company” as a possible target for an AI-related purchase. He noted that Apple has acquired “about” seven businesses so far this year across multiple disciplines, but that none were “huge in terms of dollar amount.”

The company has also been pretty quiet on its promised plans to overhaul the Siri voice assistant with more AI features. The news is still sparse on that subject; according to Reuters, Cook simply stated that the team is “making good progress on a personalized Siri.” Despite hopes that Siri improvements would be unveiled at WWDC 2025, the latest projections are that the AI-powered update to that service might not be ready until spring 2026. Apple did announce a few Apple Intelligence iterations at WWDC, but the general consensus is that the company’s AI efforts have been lagging behind those of other big tech businesses. That has led to speculation that it may look externally to improve its standing in the race to build the best AI features. Most recently, some execs within Apple have reportedly been eyeing Perplexity as a potential acquisition.


Battlefield 6 gets an October 10 release date

Fall is often first-person shooter season, and it looks like this year’s release calendar will include the next entry in the Battlefield series. Battlefield 6 is launching on October 10, and will be available to play on PlayStation 5, Xbox Series X/S and PC via Steam, the Epic Games Store and the EA app.

The previous trailer for the first-person shooter only showed content from the game’s single-player campaign. While there have been some solid stories in the Battlefield franchise, the main draw for many fans is the sprawling multiplayer matches, which were the focus of today’s new trailer and livestreamed event. The signature Conquest, Rush and Breakthrough modes will return in Battlefield 6, as will typical FPS fare such as Team Deathmatch, Squad Deathmatch, Domination and King of the Hill. The new game mode coming this fall is called Escalation, in which teams face off to control and hold several capture points. On the map front, there are new locations in Egypt, Gibraltar, Tajikistan and Brooklyn, plus at least one familiar one: a remake of Operation Firestorm from Battlefield 3. There will be four familiar classes for players to choose from: Assault, Support, Recon and Engineer.

Other tweaks showcased in the multiplayer content unveiled today include a new Drag and Revive option, where downed teammates can be lugged to a safer spot before you rez them, and an option for wall-mounting weapons for less recoil. There will also be plenty of opportunities for high-tech environmental destruction between the tanks, rocket launchers, aerial assaults and drone-mounted explosives. Or you can keep it simple and smash stuff with a really big hammer. If you can’t wait until October 10 to get into the combat, Battlefield 6 will have two open beta weekends on August 9-10 and August 14-17.
It’s encouraging for fans to see some solid news about the upcoming game after an investigation by Ars Technica surfaced some concerning problems with its development and with AAA gaming at large.


Google lost its antitrust case with Epic again

Google’s attempt to appeal the decision in Epic v. Google has failed. In a newly released opinion, the Ninth Circuit Court of Appeals decided to uphold the original ruling in Epic v. Google, which found that Google’s Play Store and payment systems are monopolies. The decision means that Google will have to abide by the remedies of the original lawsuit, which limit the company’s ability to pay phone makers to preinstall the Play Store, prevent it from requiring developers to use its payment systems and force it to open up Android to third-party app stores. Not only will Google have to allow third-party app stores to be downloaded from the Play Store, it also has to give those app stores “catalog access” to all the apps currently in the Play Store so they can have a competitive offering.

In October 2024, Google won an administrative stay that put a pause on some of those restrictions pending the results of this Ninth Circuit case. “The stay motion on appeal is denied as moot in light of our decision,” Judge M. Margaret McKeown, who oversaw the case, writes.

“This decision will significantly harm user safety, limit choice, and undermine the innovation that has always been central to the Android ecosystem,” Lee-Anne Mulholland, Google’s Global Head of Regulatory Affairs, told Engadget. “Our top priority remains protecting our users, developers and partners, and maintaining a secure platform as we continue our appeal.” Google intends to appeal the Ninth Circuit’s decision to the Supreme Court.

The origin of the Epic v. Google lawsuit was Epic’s decision to circumvent Google’s payment system via a software update to Fortnite. When Google caught wind, it removed Fortnite from the Play Store and Epic sued.
Epic pulled a similar gambit with Apple and the App Store, though it was far less successful in winning concessions in that case — its major judicial success there has been preventing Apple from collecting fees from developers on purchases made using third-party payment systems.


Uber Eats is stuffing AI slop into your meal

Uber Eats has added a slate of AI features designed to, theoretically, help merchants earn new customers and ease the shopping experience for users. AI-enhanced food images are intended to make dishes more appealing by improving uploaded photos. In the press release, Uber Eats shows an example in which pictures captured very close to the food are transformed into wider field-of-view shots of plated dishes. Because the tool creates portions of the image that were not there before, its accuracy remains to be seen.

Menu descriptions will also get the AI treatment, with the idea being to ensure their accuracy so that customers feel more confident in what they’re ordering. The AI will also summarize restaurant reviews, with the goal of highlighting areas for improvement as well as strengths. Much like the generated images, the jury is still out on whether these tools will be useful or just another vector for hallucinations.

Uber is also inviting users to send photos of delivered food for items that lack menu images. Customers in the United States, Canada, Mexico and the United Kingdom can earn $3 in Uber Cash for their pictures. “Live order chat” will finally allow merchants to initiate conversations directly with customers once an order has been placed.


OpenAI removes ChatGPT feature after private conversations leak to Google search

OpenAI made a rare about-face Thursday, abruptly discontinuing a feature that allowed ChatGPT users to make their conversations discoverable through Google and other search engines. The decision came within hours of widespread social media criticism and represents a striking example of how quickly privacy concerns can derail even well-intentioned AI experiments. The feature, which OpenAI described as a “short-lived experiment,” required users to actively opt in by sharing a chat and then checking a box to make it searchable. Yet the rapid reversal underscores a fundamental challenge facing AI companies: balancing the potential benefits of shared knowledge with the very real risks of unintended data exposure.

The controversy erupted when users discovered they could search Google using the query “site:chatgpt.com/share” to find thousands of strangers’ conversations with the AI assistant. What emerged painted an intimate portrait of how people interact with artificial intelligence — from mundane requests for bathroom renovation advice to deeply personal health questions and professionally sensitive resume rewrites. (Given the personal nature of these conversations, which often contained users’ names, locations, and private circumstances, VentureBeat is not linking to or detailing specific exchanges.) “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,” OpenAI’s security team explained on X, acknowledging that the guardrails weren’t sufficient to prevent misuse.
The incident reveals a critical blind spot in how AI companies approach user experience design. While technical safeguards existed — the feature was opt-in and required multiple clicks to activate — the human element proved problematic. Users either didn’t fully understand the implications of making their chats searchable or simply overlooked the privacy ramifications in their enthusiasm to share helpful exchanges. As one security expert noted on X: “The friction for sharing potential private information should be greater than a checkbox or not exist at all.”

OpenAI’s misstep follows a troubling pattern in the AI industry. In September 2023, Google faced similar criticism when its Bard AI conversations began appearing in search results, prompting the company to implement blocking measures. Meta encountered comparable issues when some users of Meta AI inadvertently posted private chats to public feeds, despite warnings about the change in privacy status. These incidents illuminate a broader challenge: AI companies are moving rapidly to innovate and differentiate their products, sometimes at the expense of robust privacy protections. The pressure to ship new features and maintain competitive advantage can overshadow careful consideration of potential misuse scenarios. For enterprise decision makers, this pattern should raise serious questions about vendor due diligence. If consumer-facing AI products struggle with basic privacy controls, what does this mean for business applications handling sensitive corporate data?
What businesses need to know about AI chatbot privacy risks

The searchable ChatGPT controversy carries particular significance for business users who increasingly rely on AI assistants for everything from strategic planning to competitive analysis. While OpenAI maintains that enterprise and team accounts have different privacy protections, the consumer product fumble highlights the importance of understanding exactly how AI vendors handle data sharing and retention. Smart enterprises should demand clear answers about data governance from their AI providers. Key questions include: Under what circumstances might conversations be accessible to third parties? What controls exist to prevent accidental exposure? How quickly can companies respond to privacy incidents? The incident also demonstrates the viral nature of privacy breaches in the age of social media. Within hours of the initial discovery, the story had spread across X.com (formerly Twitter), Reddit, and major technology publications, amplifying reputational damage and forcing OpenAI’s hand.

The innovation dilemma: Building useful AI features without compromising user privacy

OpenAI’s vision for the searchable chat feature wasn’t inherently flawed. The ability to discover useful AI conversations could genuinely help users find solutions to common problems, similar to how Stack Overflow has become an invaluable resource for programmers. The concept of building a searchable knowledge base from AI interactions has merit. However, the execution revealed a fundamental tension in AI development. Companies want to harness the collective intelligence generated through user interactions while protecting individual privacy. Finding the right balance requires more sophisticated approaches than simple opt-in checkboxes. One user on X captured the complexity: “Don’t reduce functionality because people can’t read.
The default are good and safe, you should have stood your ground.” But others disagreed, with one noting that “the contents of chatgpt often are more sensitive than a bank account.” As product development expert Jeffrey Emanuel suggested on X: “Definitely should do a post-mortem on this and change the approach going forward to ask ‘how bad would it be if the dumbest 20% of the population were to misunderstand and misuse this feature?’ and plan accordingly.”

Essential privacy controls every AI company should implement

The ChatGPT searchability debacle offers several important lessons for both AI companies and their enterprise customers. First, default privacy settings matter enormously. Features that could expose sensitive information should require explicit, informed


Hard-won vibe coding insights: Mailchimp’s 40% speed gain came with governance price

Intuit Mailchimp provides email marketing and automation capabilities. It’s part of the larger Intuit organization, which has been on a steady journey with gen AI over the last several years, rolling out its own GenOS and agentic AI capabilities across its business units. While the company has its own AI capabilities, Mailchimp has found a need in some cases to use vibe coding tools.

It all started, as many things do, with trying to hit a very tight timeline. Mailchimp needed to demonstrate a complex customer workflow to stakeholders immediately. Traditional design tools like Figma couldn’t deliver the working prototype they needed. Some Mailchimp engineers had already been quietly experimenting with AI coding tools. When the deadline pressure hit, they decided to test these tools on a real business challenge.

“We actually had a very interesting situation where we needed to prototype some stuff for our stakeholders, almost on an immediate basis, it was a pretty complex workflow that we needed to prototype,” Shivang Shah, Chief Architect at Intuit Mailchimp, told VentureBeat. The Mailchimp engineers used vibe coding tools and were surprised by the results. “Something like this would probably take us days to do,” Shah said. “We were able to kind of do it in a couple of hours, which was very, very interesting.”

That prototype session sparked Mailchimp’s broader adoption of AI coding tools. Now, using those tools, the company has achieved development speeds up to 40% faster while learning critical lessons about governance, tool selection and human expertise that other enterprises can immediately apply.
The evolution from Q&A to ‘do it for me’

Mailchimp’s journey reflects a broader shift in how developers interact with AI. Initially, engineers used conversational AI tools for basic guidance and algorithm suggestions. “I think even before vibe coding became a thing, a lot of engineers were already leveraging the existing, conversational AI tools to actually do some form of – hey, is this the right algorithm for the thing that I’m trying to solve for?” Shah noted. The paradigm fundamentally changed with modern AI vibe coding tools. Instead of simple questions and answers, the tools began doing some of the coding work themselves. This shift from consultation to delegation represents the core value proposition that enterprises are grappling with today.

Mailchimp deliberately adopted multiple AI coding platforms instead of standardizing on one. The company uses Cursor, Windsurf, Augment, Qodo and GitHub Copilot, based on a key insight about specialization. “What we realized is, depending on the life cycle of your software development, different tools give you different benefits or different expertise, almost like having an engineer working with you,” Shah said. This approach mirrors how enterprises deploy different specialized tools for different development phases. Companies avoid forcing a one-size-fits-all solution that may excel in some areas while underperforming in others. The strategy emerged from practical testing rather than theoretical planning. Mailchimp discovered through usage that different tools excelled at different tasks within their development workflow.

Governance frameworks prevent AI coding chaos

Mailchimp’s most critical vibe coding lesson centers on governance. The company implemented both policy-based and process-embedded guardrails that other enterprises can adapt. The policy framework includes responsible AI reviews for any AI-based deployment that touches customer data.
Process-embedded controls ensure human oversight remains central. AI may conduct initial code reviews, but human approval is still required before any code is deployed to production. “There’s always going to be a human in the loop,” Shah emphasized. “There’s always going to be a person who will have to refine it, we’ll have to gut check it, make sure it’s actually solving the right problem.” This dual-layer approach addresses a common concern among enterprises. Companies want AI productivity benefits while maintaining code quality and security standards.

Context limitations require strategic prompting

Mailchimp discovered that AI coding tools face a significant limitation. The tools understand general programming patterns but lack specific knowledge of the business domain. “AI has learned from the industry standards as much as possible, but at the same time, it might not fit in the existing user journeys that we have as a product,” Shah noted. This insight led to a critical realization. Successful AI coding requires engineers to provide increasingly specific context through carefully crafted prompts based on their technical and business knowledge. “You still need to understand the technologies, the business, the domain, and the system architecture aspects of things. At the end of the day, AI helps amplify what you know and what you could do with it,” Shah explained. The practical implication for enterprises: teams need training both on the tools and on how to communicate business context to AI systems effectively.

Prototype-to-production gap remains significant

AI coding tools excel at rapid prototyping, but Mailchimp learned that prototypes don’t automatically become production-ready code. Integration complexity, security requirements and system architecture considerations still require significant human expertise. “Just because we have a prototype in place, we should not jump to a conclusion that this can be done in X amount of time,” Shah cautioned. 
“Prototype does not equate to take the prototype to production.” This lesson helps enterprises set realistic expectations about the impact of AI coding tools on development timelines. The tools significantly help with prototyping and initial development, but they’re not a magic solution for the entire software development lifecycle.

Strategic focus shift toward higher-value work

The most transformative impact wasn’t just speed. The tools enabled engineers to focus on higher-value activities. Mailchimp engineers now spend more time on system design, architecture and customer workflow integration rather than repetitive coding tasks. “It helps us spend more time on system design and architecture,” Shah explained. “Then really, how


Amazon DocumentDB Serverless database looks to accelerate agentic AI, cut costs

The database industry has undergone a quiet revolution over the past decade. Traditional databases required administrators to provision fixed capacity, including both compute and storage resources. Even in the cloud, with database-as-a-service options, organizations were essentially paying for server capacity that sits idle most of the time but can handle peak loads. Serverless databases flip this model. They automatically scale compute resources up and down based on actual demand and charge only for what gets used. Amazon Web Services (AWS) pioneered this approach over a decade ago with its DynamoDB and has expanded it to relational databases with Aurora Serverless.

Now, AWS is taking the next step in the serverless transformation of its database portfolio with the general availability of Amazon DocumentDB Serverless. This brings automatic scaling to MongoDB-compatible document databases. The timing reflects a fundamental shift in how applications consume database resources, particularly with the rise of AI agents. Serverless is ideal for unpredictable demand scenarios, which is precisely how agentic AI workloads behave.
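The pay-for-use model described above can be illustrated with a little arithmetic. This is only a sketch: the workload trace, the `price_per_unit_hour` values and the simplified hourly billing formula are all made up for illustration (real AWS pricing and billing granularity differ), and the min/max clamp stands in for the capacity guardrails that serverless databases typically offer.

```python
def provisioned_cost(demand, price_per_unit_hour):
    """Fixed provisioning: pay for peak capacity every hour, 24/7."""
    peak = max(demand)
    return peak * len(demand) * price_per_unit_hour

def serverless_cost(demand, price_per_unit_hour, min_cap=0.5, max_cap=64):
    """Serverless: pay per hour for actual demand, clamped to
    illustrative min/max capacity guardrails."""
    billed = [min(max(d, min_cap), max_cap) for d in demand]
    return sum(billed) * price_per_unit_hour

# A spiky, agent-like workload: mostly idle with short bursts
# (capacity units consumed per hour; numbers are invented).
demand = [1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 48, 1]

fixed = provisioned_cost(demand, 0.10)   # sized for the 48-unit peak
elastic = serverless_cost(demand, 0.12)  # slight per-unit premium
print(f"provisioned: ${fixed:.2f}, serverless: ${elastic:.2f}")
# → provisioned: $57.60, serverless: $10.80
```

Even with a higher per-unit rate, the elastic bill comes out far lower here because the fixed deployment pays for the burst-sized peak around the clock, which is the dynamic behind the "up to 90% for variable workloads" style of claim.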
“We are seeing that more of the agentic AI workloads fall into the elastic and less-predictable end,” Ganapathy (G2) Krishnamoorthy, VP of AWS Databases, told VentureBeat. “So actually agents and serverless just really go hand in hand.”

Serverless vs Database-as-a-Service compared

The economic case for serverless databases becomes compelling when examining how traditional provisioning works. Organizations typically provision database capacity for peak loads, then pay for that capacity 24/7 regardless of actual usage. This means paying for idle resources during off-peak hours, weekends and seasonal lulls. “If your workload demand is actually just more dynamic or less predictable, then serverless actually fits best because it gives you capacity and scale headroom, without actually having to pay for the peak at all times,” Krishnamoorthy explained. AWS claims Amazon DocumentDB Serverless can reduce costs by up to 90% compared to traditional provisioned databases for variable workloads. The savings come from automatic scaling that matches capacity to actual demand in real time.

A potential risk with a serverless database, however, is cost certainty. With a database-as-a-service option, organizations typically pay a fixed cost for a ‘T-shirt-sized’ small, medium or large database configuration. With serverless, there isn’t the same fixed cost structure in place. Krishnamoorthy noted that AWS has implemented cost guardrails for serverless databases through minimum and maximum capacity thresholds, preventing runaway expenses.

What DocumentDB is and why it matters

DocumentDB is AWS’s managed document database service with MongoDB API compatibility. Unlike relational databases that store data in rigid tables, document databases store information as JSON (JavaScript Object Notation) documents. This makes them ideal for applications that need flexible data structures.
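To make the "flexible data structures" point concrete, here is a toy sketch in plain Python: two documents in the same collection carry entirely different attributes, something a rigid relational table would need schema changes (or many NULL columns) to accommodate. The `find` helper and the sample catalog are invented stand-ins for illustration; real DocumentDB queries go through the MongoDB API via a driver.

```python
import json

# Two products in the same "collection" need not share a schema:
# a document store accepts whatever JSON fields each item actually has.
catalog = [
    {"_id": 1, "name": "T-shirt", "price": 19.99,
     "sizes": ["S", "M", "L"], "material": "cotton"},
    {"_id": 2, "name": "USB-C cable", "price": 9.99,
     "length_m": 2, "data_rate_gbps": 10},  # entirely different attributes
]

def find(collection, query):
    """Toy equality filter in the spirit of MongoDB's find();
    it returns every document matching all key/value pairs."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

cable = find(catalog, {"name": "USB-C cable"})[0]
print(cable["length_m"])  # → 2
print(json.dumps(cable))  # each document round-trips as plain JSON
```

The same query shape works regardless of which fields a given document happens to carry, which is why this model suits catalogs with varying attributes.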
The service handles common use cases, including gaming applications that store player profile details, ecommerce platforms managing product catalogs with varying attributes and content management systems. The MongoDB compatibility creates a migration path for organizations currently running MongoDB. From a competitive perspective, MongoDB can run on any cloud, while Amazon DocumentDB runs only on AWS. The risk of lock-in can be a concern, but it is an issue that AWS is trying to address in different ways, one of which is a federated query capability. Krishnamoorthy noted that it’s possible to use an AWS database to query data that might be in another cloud provider. “It is a reality that most customers have their infrastructure spread across multiple clouds,” Krishnamoorthy said. “We look at, essentially, just what problems are actually customers trying to solve.”

How DocumentDB serverless fits into the agentic AI landscape

AI agents present a unique challenge for database administrators because their resource consumption patterns are difficult to predict. Unlike traditional web applications, which typically have relatively steady traffic patterns, agents can trigger cascading database interactions that administrators cannot anticipate. Traditional document databases require administrators to provision for peak capacity, leaving resources idle during quiet periods. With AI agents, those peaks can be sudden and massive. The serverless approach eliminates this guesswork by automatically scaling compute resources based on actual demand rather than predicted capacity needs. Beyond just being a document database, Krishnamoorthy noted that Amazon DocumentDB Serverless will also support and work with MCP (Model Context Protocol), which is widely used to enable AI tools to work with data. As it turns out, MCP at its core is a set of JSON APIs.
Because Amazon DocumentDB is itself JSON-based, it can offer developers a more familiar experience when working with MCP, according to Krishnamoorthy.

Why it matters for enterprises: Operational simplification beyond cost savings

While cost reduction gets the headlines, the operational benefits of serverless may prove more significant for enterprise adoption. Serverless eliminates the need for capacity planning, one of the most time-consuming and error-prone aspects of database administration. “Serverless actually just scales just right to actually just fit your needs,” Krishnamoorthy said. “The second thing is that it actually reduces the amount of operational burden you have, because you’re not actually just capacity planning.” This operational simplification becomes more valuable as organizations scale their AI initiatives. Instead of database administrators constantly adjusting capacity based on agent usage patterns, the system handles scaling automatically, freeing teams to focus on application development.

For enterprises looking to lead the way in AI, this news means document databases in AWS can now scale seamlessly with unpredictable agent workloads while reducing both operational complexity and infrastructure costs. The serverless model provides a foundation for AI experiments that can scale automatically without upfront capacity planning. For enterprises looking to adopt AI later in the cycle, this means serverless architectures


Is the UK’s New Online Safety Act About Protection or Control?

Key Takeaways

- New UK Online Safety Act has layers: The UK’s newest provisions to the Online Safety Act claim to protect children from harmful online content (pornographic content, specifically), but is that really all there is to it?
- The people have spoken: Proton VPN reported a 1,400% surge in UK signups only minutes after the bill was passed, showing exactly what many people thought of the new legislation.
- The ‘porn block’ may involve more control than protection: The new provisions could open the door to content over-moderation, privacy infringement, and security vulnerabilities. After all, online platforms (like PornHub, which has suffered multiple data leaks) will have more of your personal data (like your ID).

If Orwell had a social media account, it’s pretty likely he would have been shadowbanned by now. Not for being rude, but for being a little too on point. The UK has officially signed into law the Online Safety Act’s newest provisions. At first glance, it sounds like a well-intended no-brainer: a sweeping effort to protect children from the myriad of harmful content on the internet. But dig a little deeper, and it starts to look a little unsettling. Critics argue that the law might clean up some of the internet’s darker corners, but on the flipside, it runs a high risk of turning the internet into a tightly policed echo chamber. Vague terms like ‘legal but harmful’ give platforms broad leeway to take down anything that could plausibly upset anyone. Is this law a genuine attempt to make the online world safer? Or is it simply a convenient vehicle to exert a little more control over it?

What the Law Says, and Why People Are Worried

The Online Safety Act gives Ofcom, the UK’s communications regulator (or media watchdog, as the Brits call it), broad powers to oversee digital platforms, ranging from social media giants like X (formerly Twitter) to messaging apps, forums, and even online games.
It affects virtually all platforms that let UK users interact with one another, including companies not headquartered in the UK but with a large UK user base. Here's a summary of what changed on July 25, 2025:

- The 'Are you 18?' checkbox was replaced with an age verification process
- Facial age estimation and email-based age verification have become necessary steps
- Banks and mobile providers will be able to confirm your adult status
- Official ID verification (driver's license or passport) is required to access 'potentially harmful' platforms
- Platforms enforce more restrictive content controls for children
- Online services must report on the actions they take to keep children safe on their platforms

Ofcom's stated goal is to protect users, particularly children, from online harm. This includes cracking down on a few things that most of us can generally agree are harmful: child exploitation, terrorism-related content, and cyberbullying.

However, it gets murkier with the phrase 'legal but harmful.' This gives regulators the right to pressure platforms into removing content that isn't illegal but that, for whatever reason, they deem harmful.

This may sound reasonable on the surface. But who defines what's 'harmful'? And to whom is it harmful, exactly?

Without a more rigid definition, platforms can (and most likely will) over-police and pre-emptively remove anything 'potentially' offensive: a political meme, a protest video, or simply an awkward joke. You know that running gag about people yelling 'I feel offended by this'? Well, the UK might have just given those complaints legal repercussions.

This isn't just hypothetical. Even before the law fully kicked in, users and moderators had reported increased takedowns on platforms like Discord, Reddit, and X. Entire communities have disappeared. Conversations once considered edgy or critical are being throttled, flagged, or shadowbanned.
When a law relies on self-perceived emotional harm to police online content, platforms become afraid to host speech that isn't illegal but that might cause someone to complain. And that's a losing move for internet users, free speech, and digital freedom.

The Public Reaction Was Instant (and Loud)

Government officials framed the Online Safety Act as a protective measure; the public, however, seemed to interpret it differently. Within minutes of the law going into effect, UK residents started making moves online, and fast.

Proton VPN, one of the world's most privacy-focused VPN providers, reported a 1,400% surge in UK signups just minutes after the bill was passed.

No, that's not a typo. It was the largest spike the company had ever seen from a single country, and unlike other VPN surges triggered by one-off censorship events, such as France's adult site block, this one didn't fade. This reaction says a lot more than any think tank or government press release could.

People saw the law for what it could become: a way to monitor, censor, and control. Accordingly, they responded by masking up digitally and protecting themselves as best they could. A petition to repeal the Act has already gathered over 446,000 signatures; that's almost half a million people who joined in protest less than a week after the new provisions passed into law.

VPNs, privacy forums, and encrypted apps are no longer niche. They're becoming a necessary form of digital self-defense. And remember, this whole UK 'porn block' originally started with a 2015 poll by OnePoll, a survey company known for polls like 'The World's Coolest Man Bun' and 'Wrong Side of the Bed: Myth or Fact.'

Was It Ever Just About Protecting the Children?

On paper, the Online Safety Act is about shielding children from harmful content: a cause we can all get behind. But if that were really the goal, why are adults losing access to forums, private servers, and even something as harmless as political memes?
The truth is, this law reaches far beyond explicit material. It's already affecting:

- Gaming communities on Discord, where moderators and moderation bots receive directions to remove 'harmful' messages
- Political discourse on X, where posts criticizing UK policy are disappearing or receiving warnings
- Encrypted chats, where the threat


CMA told to expedite action against AWS and Microsoft to rebalance UK cloud market

The Competition and Markets Authority (CMA) has published a summary of the final conclusions it has reached following the completion of its long-running probe into the inner workings of the UK cloud infrastructure services market.

By Caroline Donnelly, Senior Editor, UK. Published: 31 Jul 2025 16:24

The Competition and Markets Authority (CMA) has recommended that Microsoft and Amazon Web Services (AWS) should face "targeted and bespoke" interventions to curb behaviours the watchdog has concluded are harming competition within the UK cloud infrastructure services market.

The recommendation features in a summary document published by the UK competition watchdog that outlines the conclusions it has reached now that its investigation into the inner workings of the UK cloud infrastructure services market, which began in October 2023, has ended. The eight-page document confirms the CMA will push ahead with its prior proposal that AWS and Microsoft should be subject to targeted remedies to restore competition within the cloud market. This course of action was previously put forward by the watchdog when the provisional findings from its investigation were made public in January 2025.

To this end, the CMA said it has recommended that its board use powers conferred on it through the roll-out of the Digital Markets, Competition and Consumers Act 2024 (DMCCA) to mark AWS and Microsoft out as suppliers with "strategic market status" (SMS). As confirmed in the document, the CMA board is expected to consider this recommendation in early 2026.

"This [action] would enable the CMA to impose targeted and bespoke interventions to address the concerns we have identified," the CMA's final report summary document stated. These interventions could also be iteratively adapted in response to changing market conditions, and will be kept under review in case further investigations into the behaviour of AWS and Microsoft are required.
"Measures aimed at Microsoft and AWS would address market-wide concerns by directly benefiting most UK customers and producing wider indirect effects by altering the competitive conditions for other providers," the summary document stated.

CMA competition concerns identified

The summary document goes on to outline the concerns the CMA has about the "significant unilateral market power" AWS and Microsoft wield within the UK cloud services market, which it claims makes it harder for alternative providers to gain a foothold.

"This harm is exacerbated by the features arising from technical and commercial barriers to switching [providers] and multicloud," the CMA report said. "These barriers lock customers into their initial choice of provider, which may not reflect their evolving needs, and limit their ability to exercise choice of cloud provider. These barriers can restrict customers from responding to attractive offers or accessing innovative new services from another provider, leading to weaker competition between providers."

Microsoft's controversial practice of charging IT buyers more for opting to run its software in its competitors' cloud environments was also flagged as a concern by the CMA for "adversely impacting the competitiveness of AWS and Google in the supply of cloud services".
The report continued: "These licensing practices are a feature that, in combination with the other features we have identified, including Microsoft's large and increasing market share, further restricts the already limited choice and attractiveness of alternative products and suppliers."

Overall, the CMA said it thinks better customer outcomes would ensue if cloud markets were more competitive: "These outcomes would include more consistently competitive prices, greater prevalence of switching and multi-cloud use, and potentially higher quality and innovation."

Microsoft and AWS react to CMA final thoughts

The CMA's final thoughts on the state of the UK cloud infrastructure services market have garnered a mixed bag of responses, with, perhaps unsurprisingly, AWS and Microsoft both taking umbrage at its conclusions.

A Microsoft spokesperson said the CMA "misses the mark again" with its findings, and accused the organisation of ignoring the fact that the cloud market has "never been so dynamic and competitive" and that Google's hold on the market is growing too. "Its recommendations fail to cover Google, one of the fastest-growing cloud market participants," the spokesperson said. "Microsoft looks forward to working with the Digital Markets Unit toward an outcome that more accurately reflects the current competition in cloud that benefits UK customers."

A spokesperson for AWS shared a similar sentiment, stating the final report "disregards clear evidence of robust competition" in the UK cloud market: "The action proposed by the inquiry group is unwarranted and undermines the substantial investment and innovation that have already benefited hundreds of thousands of UK businesses.

"It risks making the UK a global outlier at a time when businesses need regulatory predictability for the UK to maintain international competitiveness.
We will continue to engage constructively with the CMA as they consider their next steps."

Meanwhile, Chris Lindsay, vice-president of customer engineering for Europe, Middle East and Africa (EMEA) at Google Cloud, described the report's findings in far more glowing terms. "The conclusive finding that restrictive licensing harms cloud customers and competition is a watershed moment for the UK," he said, before calling for the proposed interventions to be pushed through swiftly. "Swift action…is essential to ensure British businesses pay a fair price and to unleash choice, innovation and economic growth in the UK."

Nicky Stewart, senior adviser to the pro-cloud-competition advocacy group The Open Cloud Coalition, also called on the CMA to "move forward" with urgency in tackling AWS and Microsoft's behaviour. "Given the alarming anti-competitive behaviour it has identified, the current plan to start this process in early 2026 is nowhere near sufficient," said Stewart. "The UK is falling further behind on its digital ambitions around growth and resilience every day we wait."

Mark Boost, CEO at UK-based cloud services provider Civo, said the summary report's contents seem like a "gesture, rather than a reset", with its recommendations nothing more than a retread "with softer edges" of the CMA's provisional findings, released in January 2025. "The CMA has identified the same issues but failed to follow through with the urgency that the market needs," he said. "The recommendation

