

Subpostmasters shoulder costs of Fujitsu’s Post Office IT outage

Fujitsu datacentre outage hit subpostmaster sales for two hours, leaving subpostmasters to seek compensation

By Karl Flinders, Chief reporter and senior editor EMEA
Published: 23 Jul 2025 11:56

Subpostmasters lost hundreds of thousands of pounds in business through lost sales and costs when Fujitsu’s datacentre outage cut them off from the software that runs their businesses. The collapse of the Horizon system on 17 July will also cause an inevitable increase in lost transactions and create accounting shortfalls – something subpostmasters had to cover, or face potential prosecution over, in the past.

While Post Office branches are small businesses, collectively they are a huge organisation relying on the same IT system, called Horizon, which is at the centre of the Post Office scandal. As Computer Weekly revealed last week, the Fujitsu outage meant Horizon was not available for hours, leaving branches across the entire network of around 11,500 Post Offices unable to run their businesses.

During the downtime, customers walked out without making or completing purchases, potentially going elsewhere, while partner firms such as Amazon, DPD and Evri may have looked at other options to leave deliveries. Subpostmasters also had to pay staff who were unable to work. If, for example, every branch lost £200 in costs and lost business during the two-hour outage, that is £2.3m in total. Branch sizes differ greatly across the network, and some larger, busier branches would have lost more significant sums.

Subpostmasters are calling for compensation and answers from the Post Office. They have also raised concerns about further potential problems, claiming that Fujitsu – which is on its way out of the Post Office contract after a quarter of a century – might not be fully committed.

Who covers losses?

Richard Trinder, subpostmaster of three branches in Yorkshire and Derbyshire, and a member of the Voice of the Subpostmasters campaign group, said he lost hundreds of pounds in wages during the downtime. He asked: “Will the Post Office and Fujitsu compensate us for this?” He also said business is lost when partners and customers go elsewhere. “For example, if you are offering Amazon delivery collection and you are down, they will go elsewhere and might never come back.”

Mark Baker, a former subpostmaster and current CWU postmaster representative, added: “Will customers who are cut off when in the Post Office ever come back? They will probably use a different branch, which means the individual subpostmaster has lost business.”

Regarding customers leaving a branch when an outage hits, Calum Greenhow, CEO at the National Federation of Subpostmasters (NFSP), said: “The Post Office always says, ‘These customers will come back’, but this is not the case because we no longer have a monopoly on many of the products.

“We know there is a service-level agreement where the Post Office pays Fujitsu for extra work when required, but we would like to know if there is one where Fujitsu has to pay for Horizon outages,” he added.

Greenhow said the NFSP has asked the Post Office whether compensation will be given to subpostmasters for loss of earnings and was told the Post Office would investigate it. There is no service-level agreement between the Post Office and subpostmasters in relation to Horizon availability.
Lost in transaction

Baker at the CWU raised fears over an expected increase in lost transactions and unexplained losses caused during the outage: “The big problem is if the system is cut off while a transaction is in transit, or in the subpostmaster’s stack, it might not be recovered.” He added that a transaction could also be lost because it’s in a queue at the datacentre.

The Post Office scandal, widely recognised as one of the biggest miscarriages of justice in UK history, was triggered by subpostmasters being blamed for unexplained accounting shortfalls.

“We are going through a very dangerous period until a new system and support is brought in,” warned Baker. He added that the Post Office needs to look at the entire architecture when replacing Horizon. “It needs to look at the front end, back end and all the bits in between. They also need to look at the robustness of the support mechanisms when there is an outage.”

Computer Weekly asked the Post Office whether it would compensate subpostmasters for losses incurred during the latest outage, but had not received a response by the time this article was published. Computer Weekly also asked whether the Post Office would take any additional measures during the next accounting period to ensure that unexplained losses caused by transaction failures during the outage are identified. It has not yet responded. The Post Office did not confirm whether Fujitsu will face any financial penalties because of the outage, and Fujitsu has not confirmed whether it has identified the cause of the outage.

Specialist investigation firm Kroll is currently reviewing the integrity of current Horizon system data and the processes used to identify discrepancies. The investigation followed a report by the Post Office scandal public inquiry, published in September 2024, which raised concerns about the current version of the controversial system. Computer Weekly asked the Post Office whether Kroll will include the latest incident as part of its review of the Horizon system, but it did not answer.

The Post Office scandal was first exposed by Computer Weekly in 2009, revealing the stories of seven subpostmasters and the problems they suffered due to Horizon accounting software, which led to the most widespread miscarriage of justice in British history (see the timeline of Computer Weekly articles about the scandal since 2009 below).


Interview: Is there an easier way to refactor applications?

We speak to the inventor of OpenRewrite about how enterprise IT can manage code across thousands of source code repos

By Cliff Saran, Managing Editor
Published: 23 Jul 2025 11:45

Looking at a typical Java migration, Jonathan Schneider, CEO and co-founder of Moderne, believes the approach organisations tend to take is unsustainable. Recalling a conversation with a major bank that needed to migrate to at least Java 17 to fix a particular vulnerability, he says: “The bank was pinned to Java 8 because it was using WebSphere.” Unless the bank moved applications from the WebSphere Java application server to the Tomcat alternative and upgraded to Java 17, it would not be able to resolve this particular Java vulnerability, adds Schneider. The challenge, he says, “is how to refactor 3,000 applications onto a more modern Java environment in a way that avoids breaking them”.

Application modernisation is a major headache for IT departments, leading to a drain on resources and a greater cyber security risk, due to older, unpatched code containing known vulnerabilities. A recent report from analyst Forrester highlights the risk organisations face as they battle to maintain legacy application code while attempting to respond to market volatility. Forrester says technical debt increases both IT costs and risks while slowing down the delivery of new capabilities. It urges IT leaders to outsource support for technical debt to a provider, which then enables the IT team to drive forward modern IT architecture and delivery practices.

“Outsourcing the legacy tech stack to proven outsource providers will ensure operational reliability at a negotiated cost, and free up funds and teams to build a modern, adaptive and AI [artificial intelligence]-powered ecosystem that drives innovation and positions you for future growth,” analysts Sharyn Leaver, Eric Brown, Riley McDonnell and Rachel Birrell state in Forrester’s Budget planning: Prepare for even more volatility report.

Application modernisation approaches are not scalable

But whether it is the responsibility of an in-house team or an outsourcer, according to Schneider, the traditional way to manage technical debt is not working. Historically, he points out, code was left with product engineers to continue to revise the application going forward and keep it up to date. Sometimes, he says, an IT consulting firm would be brought in to establish a software factory, providing application maintenance and working on one application at a time. According to Schneider, this approach has not worked. The approach Moderne takes is to consider tasks that can be solved horizontally, across the whole business.

Schneider used to work at Netflix and is the inventor of OpenRewrite, an open-source software auto-refactoring tool, and has built a business around the complexity of keeping code current. Every piece of code created basically ends up as technical debt as soon as it is deployed into production. “I could make all the perfect decisions around an application’s architecture and pick all the best libraries today, then, two months from now, for one reason or another, it’s no longer optimal,” he says.

Moderne effectively scans enterprise source code and produces a lossless semantic tree (see Swapping out a software library) of the code, stored in a database. This can then be queried to understand the impact of code changes. It can also be used with recipes that enable software developers to replace software libraries in an automated fashion.
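To make the recipe idea concrete, here is a deliberately simplified Python sketch. It is not Moderne's implementation and it does not use the OpenRewrite API: real OpenRewrite recipes operate on the lossless semantic tree rather than on raw text, which is what makes their changes type-aware. The "checkouts" directory and the commons-logging-to-SLF4J rewrite rules below are illustrative assumptions, chosen only to show the shape of the workflow: author a recipe once, review it, then apply it deterministically across every repository.

```python
# Toy illustration of the "recipe" idea: a deterministic set of rewrite rules
# applied uniformly across many repositories. OpenRewrite itself matches
# against a lossless semantic tree, not text; the regex rules here are only a
# stand-in so the control flow is easy to see.
import re
from pathlib import Path

# Hypothetical recipe: swap a legacy logging library's imports for SLF4J.
RECIPE = [
    (re.compile(r"import org\.apache\.commons\.logging\.Log;"),
     "import org.slf4j.Logger;"),
    (re.compile(r"import org\.apache\.commons\.logging\.LogFactory;"),
     "import org.slf4j.LoggerFactory;"),
]

def apply_recipe(repo_root: Path) -> int:
    """Apply every rule to every Java file under repo_root; return files changed."""
    changed = 0
    for source in repo_root.rglob("*.java"):
        original = source.read_text(encoding="utf-8")
        updated = original
        for pattern, replacement in RECIPE:
            updated = pattern.sub(replacement, updated)
        if updated != original:
            source.write_text(updated, encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    # Run the same deterministic recipe across many checked-out repositories.
    for repo in sorted(Path("checkouts").iterdir()):
        if repo.is_dir():
            print(f"{repo.name}: {apply_recipe(repo)} files changed")
```

The point of the pattern is that the judgement happens once, when the recipe is written and reviewed; the rollout across thousands of repositories is then mechanical and repeatable.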
Software developers can see if the recipe produces the desired results from a coding standpoint; they can tweak it if necessary before running it to make the required change across the entire code base.

Using AI with coding recipes

These recipes can be created using a large language model (LLM) like Claude Code. “A couple of weeks ago, a banking executive said he was trying to move applications from on-prem to containerised,” says Schneider. “But the key problem was that the applications were writing log files to disk.” This, he says, blocked the migration. “We needed to alter the logging configuration and change the code itself so that it does not write to disk,” adds Schneider.

He believes writing custom recipes to do these kinds of transformations involves learning the programming framework and becoming an expert at recipe development. However, by using Claude Code, Schneider says it took just 20 minutes to create a brand new custom recipe. “Claude Code wrote the first 10 or so patterns to modify different kinds of logging configuration and how to route this stuff out,” he says. “We could then take that recipe, use it across the first 9,000 source code repositories and see the kinds of changes that were being made.” The developer can assess the patterns produced by the recipe to check if they work, and then feed them back into Claude AI iteratively to produce similar patterns or improve a pattern the developer considered unsuitable.

For Schneider, the recipe, rather like a cooking recipe, is a set of instructions that can be followed step by step to deploy a code change. The recipe can also be tweaked and improved. “Once you are comfortable with the changes, you then have a deterministic machine to stamp it out everywhere,” he says. “We get a kind of quick iteration feedback,” adds Schneider. “At the end of the day, what you don’t have is a probabilistic system, like an LLM, making all the code edits. Rather, the probabilistic system writes a recipe that becomes a deterministic machine to make the change across the whole code base.”

He says that, given the volume of code in production, IT departments need an approach that scales. “It’s hard to imagine just how much code is out there,” says Schneider. At one of Moderne’s larger customers, he says, almost five billion lines of source code are being managed.

For Schneider, AI-based refactoring where the source code is loaded into an LLM does not stack up. The cost alone can amount to millions of dollars, which makes the approach he and Moderne take – using Claude AI just to create recipes – a potentially big cost-saver. Moderne is on


Reverse engineering GitHub Actions cache to make it fast

TL;DR: We reverse engineered GitHub Actions cache internals to transparently route cache requests through our faster, colocated cache. This delivered up to 10x faster cache performance for some of our customers, with no code changes required and no need to maintain forks of upstream actions.

Before this work began, we already had a faster alternative to GitHub Actions cache. Our approach was different: we forked each of the popular first-party actions that depended on Actions cache to point to our faster, colocated cache. But my coworkers weren’t satisfied with that solution, since it required users to change a single line of code. Apart from the user experience, maintaining these forks steadily turned into a nightmare for us. We kept at it for a while, but eventually reached an inflection point, and the operational cost became too high.

So, I set out to reverse engineer GitHub Actions cache itself, with one goal: make it fast. Really fast. And this time, without having to maintain forks or requiring the user to change a single line of code. Not one.

Sniffing out GitHub cache requests

The first step was fully understanding the inner workings of the GitHub Actions cache. Our prior experience forking the existing cache actions proved helpful, but earlier this year, GitHub threw us a curveball by deprecating its legacy cache actions in favor of a new Twirp-based service using the Azure Blob Storage SDK. Although a complete redesign, it was a win in our eyes — we love Protobufs. They’re easy to reason about, and once we could reverse engineer the interface, we could spin up a fully compatible, blazing-fast alternative.

Enter our new friend: Claude. (It’s 2025, after all.) After a few iterations of creative prompt engineering, we sniffed out the requests GitHub made to its control plane and came up with a proto definition of the actions service. If you’re hacking on similar black boxes, I highly recommend trusting an LLM with this.

But what about the Azure Blob Storage? The GitHub system switched to Azure, but our cache backend runs atop a self-hosted MinIO cluster, which is an S3-compatible blob store. In an ideal world, all blob stores would be interchangeable, but we do not live in an ideal world (at least, not yet). We had to figure out the shape of those requests. It took a little more effort to figure it out, but in the end, all roads led to network proxies.

Proxy here, proxy there, proxy everywhere

Achieving a truly seamless experience with zero required code changes requires some magic: every VM request still appears to go to the original destination (i.e. GitHub’s control plane and Azure Blob Storage), but under the hood, we sneakily redirect them within the network stack.

Now, a little color on the context: Blacksmith is a high-performance, multi-tenant CI cloud for GitHub Actions. Our fleet runs on bare-metal servers equipped with gaming CPUs with high single-core performance and NVMe drives. At the time of writing, we manage 500+ hosts across several data centers globally, spinning up ephemeral Firecracker VMs for each customer’s CI job. Every job runs Ubuntu with GitHub-provided root filesystems. With the context set, let’s talk implementation.

VM proxy

Inside each VM, we configured a lightweight NGINX server to proxy requests back to our host-level proxy. Why this extra layer? It’s simple: we need to maintain state for every upload and download, and access control is non-negotiable.
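As a rough sketch of the routing decision this in-VM proxy makes, the snippet below classifies a request and picks an upstream. The real implementation is NGINX configuration rather than application code, and the host-proxy address and cache path hints here are assumptions made for the example, not Blacksmith's actual endpoints or routes.

```python
# Conceptual sketch only: the production version is NGINX config inside the VM.
# The addresses and path hints are hypothetical.
from urllib.parse import urlsplit

HOST_PROXY = "http://169.254.0.1:8080"   # assumed host-level proxy address
CACHE_PATH_HINTS = ("/twirp/", "/_apis/artifactcache/")  # assumed cache routes

def pick_upstream(request_url: str) -> str:
    """Return the upstream a cache-aware VM proxy would forward this request to."""
    parts = urlsplit(request_url)
    if any(hint in parts.path for hint in CACHE_PATH_HINTS):
        # Cache traffic: hand it to the colocated cache via the host-level proxy.
        return f"{HOST_PROXY}{parts.path}"
    # Everything else (for example, the artifact store) keeps its original destination.
    return request_url
```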
By handling proxying inside the VM, we pick up a nice bonus: jobs running inside Docker containers can have their egress traffic cleanly intercepted and routed through our NGINX proxy. No special hacks required. These proxy servers are smart about what they forward. Cache-related requests are all redirected to our host proxy, while other GitHub control plane requests — such as those we don’t handle, like the GitHub artifact store — go straight to their usual destinations.

The choice of NGINX came down to practicality. All our root file systems ship with NGINX preinstalled, and the proxying we do here is dead simple. Sometimes the best tool is the one that’s already in the box, and in this case, there was no need to look any further.

Fighting the Azure SDK

While NGINX takes care of request routing for the GitHub Actions control plane, getting things to play nicely with the Azure SDK called for some serious kernel-level network gymnastics.

We were several cycles deep into our implementation when a surprising reality emerged: our new caching service was lagging behind our legacy version, particularly when it came to downloads. Curious, we dove back into the source code of the GitHub toolkit. What we found was telling: if the hostname isn’t recognized as Azure Blob Storage (e.g., blob.core.windows.net), the toolkit quietly skips many of its concurrency optimizations. Suddenly, the bottleneck made sense.

To address this, we performed some careful surgery. We built our own Azure-like URLs, then a decoder and translator in our host proxy to convert them into S3-compatible endpoints. Only then did the pieces fall into place, and performance once again became a nonissue.

We started with VM-level DNS remapping to map the Azure-like URL to our VM agent host. But redirecting just these specific requests to our host-level proxy required an additional step to get there. Our initial implementation at this proxying layer leaned on iptables rules to steer the right traffic toward our host proxy. It worked, at least until it didn’t. Through testing, we quickly hit the limits: iptables was already doing heavy lifting for other subsystems inside our environment, and with each VM adding or removing its own set of rules, things got messy fast, and extremely flaky. That led us to nftables, the new standard for packet filtering on Linux, and a perfect fit for our use case:

Custom rule tables: Namespacing rules per VM became simple, making it straightforward to add or remove these rules.

Atomic configuration changes: Unlike iptables, nftables allows us to atomically swap out entire config blocks. This avoids conflicts
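Picking up the Azure-to-S3 translation described above, here is a small conceptual sketch of the decode-and-rewrite step the host proxy performs. The fake Azure hostname, the bucket name, and the MinIO endpoint are illustrative assumptions, not the real URL scheme or infrastructure names.

```python
# Conceptual sketch of the host-proxy translation: accept an "Azure-like" blob
# URL (so the Actions toolkit keeps its concurrency optimizations) and map it
# onto an S3-compatible endpoint such as a MinIO cluster. Names are hypothetical.
from urllib.parse import urlsplit

MINIO_ENDPOINT = "http://minio.internal:9000"   # assumed S3-compatible store
CACHE_BUCKET = "actions-cache"                  # assumed bucket name

def azure_like_to_s3(url: str) -> str:
    """Map https://<account>.blob.core.windows.net/<container>/<key> to an
    S3-style URL on the colocated cache."""
    parts = urlsplit(url)
    # Drop the container segment and keep the object key.
    _, _, key = parts.path.lstrip("/").partition("/")
    return f"{MINIO_ENDPOINT}/{CACHE_BUCKET}/{key}"

print(azure_like_to_s3(
    "https://fake.blob.core.windows.net/cache/v1/linux-build-abc123"))
# -> http://minio.internal:9000/actions-cache/v1/linux-build-abc123
```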


Cops say criminals use a Google Pixel with GrapheneOS – I say that’s freedom

Police in Spain have reportedly started profiling people based on their phones; specifically, and surprisingly, those carrying Google Pixel devices. Law enforcement officials in Catalonia say they associate Pixels with crime because drug traffickers are increasingly turning to these phones. But it’s not Google’s secure Titan M2 chip that has criminals favoring the Pixel — instead, it’s GrapheneOS, a privacy-focused alternative to the default Pixel OS.

As someone who has used a Pixel phone with GrapheneOS, I find this assumption a bit unsettling. I have plenty of reasons to use GrapheneOS, and avoiding law enforcement isn’t on the list at all. In fact, I think many Pixel users would benefit from switching to GrapheneOS over the default Android operating system. And no, my reasons don’t have anything to do with criminal activity.

Why I use and recommend GrapheneOS

A privacy-focused operating system may seem more trouble than it’s worth. But when I replaced Google’s Pixel OS with GrapheneOS, I found it to be a transformative experience. For one, the installation was painless, and I didn’t lose any modern software features. Installing aftermarket operating systems used to equal a compromised smartphone experience, but I didn’t find that to be true in the case of GrapheneOS. Case in point: even though GrapheneOS doesn’t include any Google services, I was surprised to find that you can install the Play Store with relative ease and almost all apps work flawlessly — even most banking ones.

This is impressive for any open-source fork of Android, but GrapheneOS goes above and beyond in that it also has some major privacy and security benefits. Primarily, it locks down various parts of Android to reduce the number of attack vectors and enforces stricter sandboxing to ensure that apps remain isolated from each other. GrapheneOS just works, with almost no feature or usability compromises.

Take Google apps as an example. On almost all Android phones sold outside China, Google has far-reaching and system-level access to everything: your precise location, contacts, app usage, network activity, and a load of other data. You cannot do anything to stop it, whether you’d like to or not. However, you can with GrapheneOS because it treats Google apps like any other piece of unknown software. This means Google apps are forced to run in a sandbox where they have limited access to your data.

GrapheneOS’ sandboxing extends to invasive apps like Google Play Services and the Play Store. You can explicitly disable each and every permission for these apps manually — in fact, most permissions are disabled by default. Even better, you can create different user profiles to isolate apps that require lots of permissions. GrapheneOS can forward notifications to the primary user profile, unlike stock Android. GrapheneOS limits Google’s reach into your phone more than any other flavor of Android.

On the subject of app permissions, GrapheneOS builds on that, too. For example, you can stop apps from accessing the internet and reading your device’s sensors — stock Android doesn’t expose such granular control. And while Android permissions often take the all-or-nothing approach, GrapheneOS lets you select only the exact contacts, photos, or files that you want visible to an app. Finally, my favorite GrapheneOS feature is the ability to set a duress PIN. When entered, this secondary PIN will initiate a permanent deletion of all data on the phone, including installed eSIMs.
If I’m ever forced to give up my phone’s password, I can take solace in the fact that the attacker will not have access to my data.

If you have nothing to hide…

You might be wondering: if I don’t have anything to hide, why should I bother using GrapheneOS? That’s a fair question, but it misses the point. I don’t use GrapheneOS because I have something to hide — I use it to exercise control over the device I own. I find it comforting that Google cannot collect data to nearly the same extent if I use GrapheneOS instead of Pixel OS.

The benefits of using GrapheneOS extend far beyond just hiding from Google, though, and that’s why the project has landed on law enforcement’s radar. I believe that GrapheneOS catching attention from law enforcement just proves how much it raises the bar on privacy. GrapheneOS has built a number of app isolation-based safeguards to ensure that your phone cannot be infected remotely. The technical details are longer than I can list, but in essence, the developers stripped out parts of Android’s code that could be exploited by bad actors. Some security improvements have even been suggested and incorporated into AOSP, meaning GrapheneOS’ efforts have made all of our devices a tiny bit more secure.

Does GrapheneOS take privacy and security too far?

GrapheneOS is one of many tools that now face suspicion and political pressure simply for making surveillance harder. Take the Signal app as another example. The encrypted messaging app has been repeatedly targeted by EU lawmakers in recent years. Specifically, proposed “Chat Control” legislation would compel secure messaging platforms to scan all communication — including messages protected by end-to-end encryption — for illegal content such as child sexual abuse material. Messaging apps in the EU would be required to scan private communications before they’re encrypted, on the user’s device, and report anything that looks suspicious. While encryption itself wouldn’t be banned, Signal’s developers have rightly pointed out that mandatory on-device scanning essentially equals a backdoor. A rogue government could misuse these privileges to spy on dissenting citizens or political opponents, while hackers might be able to steal financial information. Regulators have long asked privacy apps to compromise on their singular mission: privacy.

There’s a bitter irony here, too, as GrapheneOS recently pointed out in a tweet. The Spanish region of Catalonia was at the center of the massive Pegasus spyware scandal in 2019. Pegasus, a sophisticated surveillance tool sold exclusively to governments, was reportedly used to hack phones belonging to Members of


Fighting forever chemicals and startup fatigue

What if we could permanently remove the toxic “forever chemicals” contaminating our water? That’s the driving force behind Michigan-based startup Enspired Solutions, founded by environmental toxicologist Denise Kay and chemical engineer Meng Wang. The duo left corporate consulting in the rearview mirror to take on one of the most pervasive environmental challenges: PFAS.

“PFAS is referred to as a forever chemical because it is so resistant to break down,” says Kay. “It does not break down naturally in the environment, so it just circles around and around. This chemistry, which would break that cycle and break the molecule apart, could really support the health of all of us.”

Basing the company in Michigan was both a strategic and a practical decision. The state has been a leader in PFAS regulation with a startup infrastructure—buoyed by the Michigan Economic Development Corporation (MEDC)—that helped turn an ambitious vision into a viable business. From intellectual property analyses to forecasting finances and fundraising guidance, the MEDC’s programs offered Kay and Wang the resources to focus on building their PFASigator: a machine the size of two large refrigerators that uses ultraviolet light and chemistry to break down PFAS in water. In other words, “it essentially eats PFAS.”

Despite the support from the MEDC, the journey has been far from smooth. “As people say, being an entrepreneur and running a startup is like a rollercoaster,” Kay says. “You have high moments, and you have very low moments when you think nothing’s ever going to move forward.” Without revenue or salaries in the early days, the co-founders had to be sustained by something greater than financial incentive. “If problem solving and learning new talents do not provide sufficient intrinsic reward for a founder to be satisfied throughout what I guarantee will be a long duration effort, then that founder may need to reset their expectations. Because the financial rewards of entrepreneurship are small throughout the process.”

Still, Kay remains optimistic about the road ahead for Enspired Solutions, for clean water innovation, and for other founders walking down a similar path. “Often, founders are coached about formulas for fundraising, formulas for startup success. Learning those formulas and expectations is important, but it’s also important to not forget that it’s your creativity and innovation and foresight that got you to the place you’re in and drove you to start a company. Ultimately, people still want to see that shine through.”

This episode of Business Lab is produced in partnership with the Michigan Economic Development Corporation.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Today’s episode is brought to you in partnership with the Michigan Economic Development Corporation.

Our topic today is launching a technology startup in the US state of Michigan. Building out an innovative idea into a viable product and company requires knowledge and resources that individuals might not have. That’s why the Michigan Economic Development Corporation, or the MEDC, has launched an innovation campaign to support technology entrepreneurs. Two words for you: startup ecosystem.

My guest is Dr. Denise Kay, the co-founder and CEO at Enspired Solutions, a Michigan-based startup focused on removing synthetic forever chemicals called PFAS from water. Welcome, Denise.
Dr. Denise Kay: Hi, Megan.

Megan: Hi. Thank you so much for joining us. To get us started, Denise, I wondered if we could talk about Enspired Solutions a bit more. How did the idea come about, and what does your company do?

Denise: Well, my co-founder, Meng, and I had careers in consulting, advising clients on the fate and toxicity of chemicals in the environment. What we did was evaluate how chemicals moved through soil, water, and air, and what toxic impact they might have on humans and wildlife. That put us in a really unique position to see early on the environmental and health ramifications of the manmade chemical PFAS in our environment. When we learned of a very novel and elegant chemistry that could effectively destroy PFAS, we could foresee the value in making this chemistry available for commercial use and the potential for a significant positive impact on maintaining healthy water resources for all of us. Like you mentioned, PFAS is referred to as a forever chemical because it is so resistant to break down. It does not break down naturally in the environment, so it just circles around and around. This chemistry, which would break that cycle and break the molecule apart, could really support the health of all of us.

Ultimately, Meng and I quit our jobs, and we founded Enspired Solutions. Our objective was to design, manufacture, and sell commercial-scale equipment that destroys PFAS in water based on this laboratory bench-scale chemistry that had been discovered, the goal being that this toxic contaminant does not continue to circulate in our natural resources. At this point, we have won an award from the EPA and Department of Defense, and proven our technology in over 200 different water samples ranging from groundwater, surface water, landfill leachate, industrial wastewater, [and] municipal wastewater. It’s really everywhere. What we’re seeing traction in right now is customer applications managing semiconductor waste. Groundwater and surface water around airports tend to be high in PFAS. Centralized waste disposal facilities that collect and manage PFAS-contaminated liquids. And also, even transitioning firetrucks to PFAS-free firefighting foams.

Megan: Fantastic. That’s a huge breadth of applications, incredible stuff.

Denise: Yeah.

Megan: You launched about four years ago now. I wondered what factors made Michigan the right place to build and grow the company?

Denise: That is something we put a lot of thought into, because I live in Michigan, and Meng lives in Illinois, so when it was just the two of us, there was even that, “Okay, what is going to be our headquarters?” We looked at a number of factors. Some of the things we considered were rentable incubator space. By


The Download: how to melt rocks, and what you need to know about AI

Plus: what’s going on with America’s data centers?

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This startup wants to use beams of energy to drill geothermal wells

Geothermal startup Quaise certainly has an unconventional approach when it comes to destroying rocks: it uses a new form of drilling technology to melt holes through them. The company hopes it’s the key to unlocking geothermal energy and making it feasible anywhere. Quaise’s technology could theoretically be used to tap into the Earth’s heat from anywhere on the globe. But some experts caution that reinventing drilling won’t be as simple, or as fast, as Quaise’s leadership hopes. Read the full story.

—Casey Crownhart

Five things you need to know about AI right now

—Will Douglas Heaven, senior editor for AI

Last month I gave a talk at SXSW London called “Five things you need to know about AI”—my personal picks for the five most important ideas in AI right now. I aimed the talk at a general audience, and it serves as a quick tour of how I’m thinking about AI in 2025. There’s some fun stuff in there. I even make jokes! You can now watch the video of my talk, but if you want to see the five I chose right now, here is a quick look at them. This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Why it’s so hard to make welfare AI fair

There are plenty of stories about AI that’s caused harm when deployed in sensitive situations, and in many of those cases, the systems were developed without much concern for what it meant to be fair or how to implement fairness. But the city of Amsterdam spent a lot of time and money to try to create ethical AI—in fact, it followed every recommendation in the responsible AI playbook. Yet when it deployed it in the real world, it still couldn’t remove biases. So why did Amsterdam fail? And more importantly: can this ever be done right?

Join our editor Amanda Silverman, investigative reporter Eileen Guo and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday July 30 to explore if algorithms can ever be fair. Register here!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 America’s grand data center ambitions aren’t being realized
A major partnership between SoftBank and OpenAI hasn’t got off to a flying start. (WSJ $)
+ The setback hasn’t stopped OpenAI opening its first DC office. (Semafor)

2 OpenAI is partnering with the UK government
In a bid to increase its public services’ productivity and to drive economic growth. (BBC)
+ It all sounds pretty vague. (Engadget)

3 The battle for AI math supremacy is heating up
Google and OpenAI went head to head in a math competition—but only one played by the rules. (Axios)
+ The International Math Olympiad poses a unique challenge to AI models. (Ars Technica)
+ What’s next for AI and math. (MIT Technology Review)

4 Mark Zuckerberg’s secretive Hawaiian compound is getting bigger
The multi-billionaire is sinking millions of dollars into the project. (Wired $)

5 India’s back offices are meeting global demand for AI expertise
New ‘capability centers’ could help to improve the country’s technological prospects. (FT $)
+ The founder of Infosys believes the future of AI will be more democratic. (Rest of World)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

6 A crime-tracking app will share videos with the NYPD
Public safety agencies will have access to footage shared on Citizen. (The Verge)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

7 China has a problem with competition: there’s too much of it
Its government is making strides to crack down on price wars within sectors. (NYT $)
+ China’s Xiaomi is making waves across the world. (Economist $)

8 The metaverse is a tobacco marketer’s playground 🚬
Fed up of legal constraints, they’re already operating in unregulated spaces. (The Guardian)
+ Welcome to the oldest part of the metaverse. (MIT Technology Review)

9 How AI is shaking up physics
Models are suggesting outlandish ideas that actually work. (Quanta Magazine)

10 Tesla has opened a diner that resembles a spaceship
It’s technically a drive-thru that happens to sell Tesla merch. (TechCrunch)

Quote of the day

“If you can pick off the individuals for $100 million each and they’re good, it’s actually a bargain.” —Entrepreneur Laszlo Bock tells Insider why he thinks the eye-watering sums Meta is reportedly offering top AI engineers are money well spent.

One more thing

The world’s first industrial-scale plant for green steel promises a cleaner future

As of 2023, nearly 2 billion metric tons of steel were being produced annually, enough to cover Manhattan in a layer more than 13 feet thick. Making this metal produces a huge amount of carbon dioxide. Overall, steelmaking accounts for around 8% of the world’s carbon emissions—one of the largest industrial emitters and far more than such sources as aviation. A handful of groups and companies are now making serious progress toward low- or zero-emission steel. Among them, the Swedish company Stegra stands out. The startup is currently building the first industrial-scale plant in the world to make green steel. But can it deliver on its promises? Read the full story.

—Douglas Main


This startup wants to use beams of energy to drill geothermal wells

A beam of energy hit the slab of rock, which quickly began to glow. Pieces cracked off, sparks ricocheted, and dust whirled around under a blast of air. From inside a modified trailer, I peeked through the window as a millimeter-wave drilling rig attached to an unassuming box truck melted a hole into a piece of basalt in less than two minutes. After the test was over, I stepped out of the trailer into the Houston heat. I could see a ring of black, glassy material stamped into the slab fragments, evidence of where the rock had melted.

This rock-melting drilling technology from the geothermal startup Quaise is certainly unconventional. The company hopes it’s the key to unlocking geothermal energy and making it feasible anywhere.

Geothermal power tends to work best in those parts of the world that have the right geology and heat close to the surface. Iceland and the western US, for example, are hot spots for this always-available renewable energy source because they have all the necessary ingredients. But by digging deep enough, companies could theoretically tap into the Earth’s heat from anywhere on the globe. That’s a difficult task, though. In some places, accessing temperatures high enough to efficiently generate electricity would require drilling miles and miles beneath the surface. Often, that would mean going through very hard rock, like granite.

Quaise’s proposed solution is a new mode of drilling that eschews the traditional technique of scraping into rock with a hard drill bit. Instead, the company plans to use a gyrotron, a device that emits high-frequency electromagnetic radiation. Today, the fusion power industry uses gyrotrons to heat plasma to 100 million °C, but Quaise plans to use them to blast, melt, and vaporize rock. This could, in theory, make drilling faster and more economical, allowing for geothermal energy to be accessed anywhere.

Since Quaise’s founding in 2018, the company has demonstrated that its systems work in the controlled conditions of the laboratory, and it has started trials in a semi-controlled environment, including the backyard of its Houston headquarters. Now these efforts are leaving the lab, and the team is taking gyrotron drilling technology to a quarry to test it in real-world conditions.

Some experts caution that reinventing drilling won’t be as simple, or as fast, as Quaise’s leadership hopes. The startup is also attempting to raise a large funding round this year, at a time when economic uncertainty is slowing investment and the US climate technology industry is in a difficult spot politically because of policies like tariffs and a slowdown in government support. Quaise’s big idea aims to accelerate an old source of renewable energy. This make-or-break moment might determine how far that idea can go.

Blasting through

Rough calculations from the geothermal industry suggest that enough energy is stored inside the Earth to meet our energy demands for tens or even hundreds of thousands of years, says Matthew Houde, cofounder and chief of staff at Quaise. After that, other sources like fusion should be available, “assuming we continue going on that long, so to speak,” he quips. “We want to be able to scale this style of geothermal beyond the locations where we’re able to readily access those temperatures today with conventional drilling,” Houde says.
The key, he adds, is simply going deep enough: “If we can scale those depths to 10 to 20 kilometers, then we can enable super-hot geothermal to be worldwide accessible.” Though that’s technically possible, there are few examples of humans drilling close to this depth. One research project that began in 1970 in the former Soviet Union reached just over 12 kilometers, but it took nearly 20 years and was incredibly expensive.

Quaise hopes to speed up drilling and cut its cost, Houde says. The company’s goal is to drill through rock at a rate of between three and five meters per hour of steady operation. One key factor slowing down many operations that drill through hard rocks like granite is nonproductive time. For example, equipment frequently needs to be brought all the way back up to the surface for repairs or to replace drill bits.

Quaise’s key to potentially changing that is its gyrotron. The device emits millimeter waves, beams of energy with wavelengths that fall between microwaves and infrared waves. It’s a bit like a laser, but the beam is not visible to the human eye. Quaise’s goal is to heat up the target rock, effectively drilling it away. The gyrotron beams waves at a target rock via a waveguide, a hollow metal tube that directs the energy to the right spot. (One of the company’s main technological challenges is to avoid accidentally making plasma, an ionized, superheated state of matter, as it can waste energy and damage key equipment like the waveguide.)

Here’s how it works in practice: When Quaise’s rig is drilling a hole, the tip of the waveguide is positioned a foot or so away from the rock it’s targeting. The gyrotron lets out a burst of millimeter waves for about a minute. They travel down the waveguide and hit the target rock, which heats up and then cracks, melts, or even vaporizes. Then the beam stops, and the drill bit at the end of the waveguide is lowered to the surface of the rock, rotating and scraping off broken shards and melted bits of rock as it descends. A steady blast of air carries the debris up to the surface, and the process repeats. The energy in the millimeter waves does the hard work, and the scraping and compressed air help carry the fractured or melted material away.

This system is what I saw in action at the company’s Houston headquarters. The drilling rig in the yard is a small setup, something like what a construction company might use to drill micro piles for a foundation or what researchers would use to take geological samples. In total, the gyrotron has a power of 100 kilowatts. A cooling system helps the superconducting


Five things you need to know about AI right now

The video is now available (thank you, SXSW London). Below is a quick look at my top five. Let me know if you would have picked different ones!

1. Generative AI is now so good it’s scary.

Maybe you think that’s obvious. But I am constantly having to check my assumptions about how fast this technology is progressing—and it’s my job to keep up. A few months ago, my colleague—and your regular Algorithm writer—James O’Donnell shared 10 music tracks with the MIT Technology Review editorial team and challenged us to pick which ones had been produced using generative AI and which had been made by people. Pretty much everybody did worse than chance.

What’s happening with music is happening across media, from code to robotics to protein synthesis to video. Just look at what people are doing with new video-generation tools like Google DeepMind’s Veo 3. And this technology is being put into everything. My point here? Whether you think AI is the best thing to happen to us or the worst, do not underestimate it. It’s good, and it’s getting better.

2. Hallucination is a feature, not a bug.

Let’s not forget the fails. When AI makes up stuff, we call it hallucination. Think of customer service bots offering nonexistent refunds, lawyers submitting briefs filled with nonexistent cases, or RFK Jr.’s government department publishing a report that cites nonexistent academic papers. You’ll hear a lot of talk that makes hallucination sound like it’s a problem we need to fix. The more accurate way to think about hallucination is that this is exactly what generative AI does—what it’s meant to do—all the time. Generative models are trained to make things up. What’s remarkable is not that they make up nonsense, but that the nonsense they make up so often matches reality. Why does this matter? First, we need to be aware of what this technology can and can’t do. But also: Don’t hold out for a future version that doesn’t hallucinate.

3. AI is power hungry and getting hungrier.

You’ve probably heard that AI is power hungry. But a lot of that reputation comes from the amount of electricity it takes to train these giant models, though giant models only get trained every so often. What’s changed is that these models are now being used by hundreds of millions of people every day. And while using a model takes far less energy than training one, the energy costs ramp up massively with those kinds of user numbers. ChatGPT, for example, has 400 million weekly users. That makes it the fifth-most-visited website in the world, just after Instagram and ahead of X. Other chatbots are catching up. So it’s no surprise that tech companies are racing to build new data centers in the desert and revamp power grids. The truth is we’ve been in the dark about exactly how much energy it takes to fuel this boom because none of the major companies building this technology have shared much information about it. That’s starting to change, however. Several of my colleagues spent months working with researchers to crunch the numbers for some open source versions of this tech. (Do check out what they found.)

4. Nobody knows exactly how large language models work.

Sure, we know how to build them. We know how to make them work really well—see no. 1 on this list. But how they do what they do is still an unsolved mystery. It’s like these things have arrived from outer space and scientists are poking and prodding them from the outside to figure out what they really are.
It’s incredible to think that never before has a mass-market technology used by billions of people been so little understood. Why does that matter? Well, until we understand them better we won’t know exactly what they can and can’t do. We won’t know how to control their behavior. We won’t fully understand hallucinations.

5. AGI doesn’t mean anything.

Not long ago, talk of AGI was fringe, and mainstream researchers were embarrassed to bring it up. But as AI has got better and far more lucrative, serious people are happy to insist they’re about to create it. Whatever it is. AGI—or artificial general intelligence—has come to mean something like: AI that can match the performance of humans on a wide range of cognitive tasks. But what does that mean? How do we measure performance? Which humans? How wide a range of tasks? And performance on cognitive tasks is just another way of saying intelligence—so the definition is circular anyway. Essentially, when people refer to AGI they now tend to just mean AI, but better than what we have today. There’s this absolute faith in the progress of AI. It’s gotten better in the past, so it will continue to get better. But there is zero evidence that this will actually play out.

So where does that leave us? We are building machines that are getting very good at mimicking some of the things people do, but the technology still has serious flaws. And we’re only just figuring out how it actually works. Here’s how I think about AI: We have built machines with humanlike behavior, but we haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to exaggerated assumptions about what AI can do and plays into the wider culture wars between techno-optimists and techno-skeptics.

It’s right to be amazed by this technology. It’s also right to be skeptical of many of the things said about it. It’s still very early days, and it’s all up for grabs.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.


Lumma infostealer malware returns after law enforcement disruption

The Lumma infostealer malware operation is gradually resuming activities following a massive law enforcement operation in May, which resulted in the seizure of 2,300 domains and parts of its infrastructure. Although the Lumma malware-as-a-service (MaaS) platform suffered significant disruption from the law enforcement action, as confirmed by early June reports on infostealer activity, it didn’t shut down. The operators immediately acknowledged the situation on XSS forums, but claimed that their central server had not been seized (although it had been remotely wiped), and restoration efforts were already underway.

Lumma admin’s first message after the law enforcement action (Source: Trend Micro)

Gradually, the MaaS built up again and regained trust within the cybercrime community, and is now facilitating infostealing operations on multiple platforms again. According to Trend Micro analysts, Lumma has almost returned to pre-takedown activity levels, with the cybersecurity firm’s telemetry indicating a rapid rebuilding of infrastructure.

“Following the law enforcement action against Lumma Stealer and its associated infrastructure, our team has observed clear signs of a resurgence in Lumma’s operations,” reads the Trend Micro report. “Network telemetry indicates that Lumma’s infrastructure began ramping up again within weeks of the takedown.”

New Lumma C2 domains (Source: Trend Micro)

Trend Micro reports that Lumma still uses legitimate cloud infrastructure to mask malicious traffic, but has now shifted from Cloudflare to alternative providers, most notably the Russian-based Selectel, to avoid takedowns. The researchers have highlighted four distribution channels that Lumma currently uses to achieve new infections, indicating a full-on return to multifaceted targeting:

Fake cracks/keygens: Fake software cracks and keygens are promoted via malvertising and manipulated search results. Victims are directed to deceptive websites that fingerprint their system using Traffic Detection Systems (TDS) before serving the Lumma Downloader.

ClickFix: Compromised websites display fake CAPTCHA pages that trick users into running PowerShell commands. These commands load Lumma directly into memory, helping it evade file-based detection mechanisms.

GitHub: Attackers are actively creating GitHub repositories with AI-generated content advertising fake game cheats. These repos host Lumma payloads, like “TempSpoofer.exe,” either as executables or in ZIP files.

YouTube/Facebook: Current Lumma distribution also involves YouTube videos and Facebook posts promoting cracked software. These links lead to external sites hosting Lumma malware, which sometimes abuses trusted services like sites.google.com to appear credible.

Malicious GitHub repository (left) and YouTube video (right) distributing Lumma payloads (Source: Trend Micro)

The re-emergence of Lumma as a significant threat demonstrates that law enforcement action without arrests, or at least indictments, is ineffective in stopping these determined threat actors. MaaS operations such as Lumma are incredibly profitable, and the leading operators behind them likely view law enforcement action as routine obstacles they merely have to navigate.


Windows 11 KB5062660 update brings new ‘Windows Resilience’ features

Microsoft has released the KB5062660 preview cumulative update for Windows 11 24H2 with twenty-nine new features or changes, with many gradually rolling out, such as the new Black Screen of Death and Quick Machine Recovery tool.

The KB5062660 update is part of the company’s optional non-security preview updates schedule, which releases updates at the end of each month to test new fixes and features coming to next month’s August Patch Tuesday. Unlike regular Patch Tuesday cumulative updates, monthly non-security preview updates do not include security updates and are optional.

You can install the KB5062660 update by opening Settings, clicking on Windows Update, and then “Check for Updates.” Because this is an optional update, you will be asked if you want to install it by clicking the “Download and install” link, unless you have the “Get the latest updates as soon as they’re available” option enabled, which will cause the update to automatically install.

KB5062660 preview update (BleepingComputer)

You can also manually download and install the KB5062660 preview update from the Microsoft Update Catalog.

Windows 11 KB5062660 highlights

Once installed, this optional cumulative release will update Windows 11 24H2 systems to build 26100.4770. The July 2025 preview update features many new additions that are gradually rolling out, including new Windows Resiliency Initiative features—the new Black Screen of Death and the Quick Machine Recovery tool. Microsoft’s Windows Resiliency Initiative is a new effort by Microsoft to make Windows more stable, self-healing, and faster to recover from critical failures. Windows 11 users can enable Quick Machine Recovery by navigating to Settings > System > Recovery > Quick Machine Recovery settings.

Quick Machine Recovery settings (Source: BleepingComputer)

The complete list of changes is below.

[Recall]

New! Recall is now available in the European Economic Area (EEA). For more info, see Retrace your steps with Recall. In the EEA, Recall supports exporting snapshots to share with trusted third-party apps and websites. When saving snapshots is turned on for the first time, a unique Recall export code appears. This code is required to decrypt exported snapshots and is shown only once during initial setup. Microsoft doesn’t store or recover this code. To export, go to Settings > Privacy & Security > Recall & Snapshots > Advanced Settings and authenticate with Windows Hello. Choose to export past snapshots (from the last 7 days, 30 days, or all) or start a continuous export. Third-party apps can access exported snapshots only when both the export code and folder path are provided. If you lose or compromise the export code, reset Recall to generate a new one.

New! For all Recall users worldwide, you can now reset Recall and delete all its data. Go to Settings > Privacy & Security > Recall & Snapshots to find a new advanced settings page. There, you’ll see a reset button that deletes all your snapshots and restores Recall to its default settings.

[Click to Do]

New! 1 Practice in Reading Coach is a new Click to Do text action that helps you improve reading fluency and pronunciation. Select text on your screen, choose Practice in Reading Coach, and read the text aloud. Reading Coach gives you feedback and shows where to improve. To use this feature, install the free Microsoft Reading Coach app from the Microsoft Store.

New! 1 Read with Immersive Reader is a new text action in Click to Do that displays text in a focused, distraction-free environment. It helps improve reading and writing for all skill levels and abilities. You can adjust text size, spacing, font, and background theme, have text read aloud, break words into syllables, and highlight parts of speech. The picture dictionary shows images for unfamiliar words. To use this feature, install the free Microsoft Reading Coach application from the Microsoft Store.

New! 1 With the Draft with Copilot in Word text action, you can quickly turn any recognized text into a full draft. Whether it’s a sentence in an email or a snippet on your screen, press Win + Click on the recognized text, then select Draft with Copilot in Word. No more blank pages. No more writer’s block. Just momentum. To use “Draft with Copilot in Word”, a Microsoft 365 Copilot subscription is required.

New! 1 Click to Do on Copilot+ PCs now supports actions through Microsoft Teams. When you select an email address recognized by Click to Do on your screen, you can choose to send a Teams message or schedule a Teams meeting. These options make it easy to ask a question or set up time to talk without interrupting your workflow.

[Settings]

New! 1 The new agent in Settings is part of the Copilot+ PC experience and is designed to address one of the most common frustrations: finding and changing settings on your PC. You can describe what you need help with, such as “how to control my PC by voice” or “my mouse pointer is too small,” and the agent will suggest steps to resolve the issue. The agent uses AI on your PC to understand your request and, with your permission, can automate and complete tasks for you. This experience is rolling out to Snapdragon-powered Copilot+ PCs, with support for AMD and Intel™-powered PCs coming soon. It currently works only if your primary display language is set to English.

New! On non-Copilot+ PCs, the Settings app now shows the Search box at the top center to make searching easier and more consistent.

Fixed: If your PC is set to ‘Do nothing’ when you close the lid (under Settings > System > Power and Battery) and the Settings window is left open when you close the lid, reopening the lid might cause the Settings window to stop responding. It might become unresponsive to input or resizing and instead just display your accent color.

Fixed: Settings might stop responding when you try to save Wi-Fi network credentials.

[Windows Resiliency Initiative]

The following changes are part of the Windows Resiliency Initiative announced at Ignite 2024:

New! Quick machine recovery is now available. When enabled, it automatically detects and fixes widespread issues on Windows 11 devices using the Windows Recovery Environment (WinRE). This reduces downtime and avoids the need for manual fixes. If a device experiences a widespread boot issue, it enters WinRE, connects to the internet, and Microsoft can deliver a targeted fix through Windows Update. IT admins can enable or customize this experience

