
Met Police to double facial recognition use amid budget cuts

The UK’s largest police force is massively expanding its use of live facial recognition technology as it prepares to lose 1,700 officers and staff

By Sebastian Klovig Skelton, Data & ethics editor
Published: 01 Aug 2025 18:30

The Metropolitan Police will more than double its number of live facial recognition (LFR) deployments to cover the loss of 1,400 officers and 300 staff amid budget cuts.

Detailing its restructuring plans – which also include bulking up the force’s protest-focused “public order crime” team and putting more officers on the beat – the Met said LFR will now be deployed up to 10 times a week across five days, up from the current rate of four deployments over two days.

While the restructuring announcement noted that 90 additional officers would be deployed to six “high-crime” zones – including Brixton, Kingston, Ealing, Finsbury Park, Southwark and Spitalfields – it is unclear whether these areas would also see a greater number of LFR deployments. The initiative follows the force’s warning in April 2025 that it faces a £260m budget shortfall for the coming year.

Met commissioner Mark Rowley defended the move, saying the technology is used responsibly and deployed only to look for serious offenders. “We routinely put it out there and capture multiple serious offenders in one go, many of whom have committed serious offences against women or children, or people who are wanted for armed robbery,” he said. “It’s a fantastic piece of technology. It’s very responsibly used, and that’s why most of the public support it.”

On the restructuring in general, Rowley added: “While our budget has decreased in real terms, we are using this additional [£32m] funding from City Hall and the Home Office productively to support our mission to take a targeted approach to tackling volume crime and bolster our specialist tactics to disrupt the criminal gangs who fuel anti-social behaviour, robbery and theft.”

Campaign group Liberty’s policy and campaigns officer Charlie Whelton said increasing LFR use was “incredibly concerning” given the lack of regulation for the technology. “Any tech which has the potential to infringe on our rights in the way scanning and identifying millions of people does needs to have robust safeguards around its use, including ensuring that proper independent oversight is in place,” he said. “The government must legislate now to regulate this technology, protect people’s rights, and make sure that the law on facial recognition does not get outpaced by the use.”

In July 2025, home secretary Yvette Cooper confirmed for the first time that the UK government will seek to regulate police facial recognition by creating “a proper, clear governance framework”, citing police reticence to deploy systems without adequate rules in place. However, she declined to say whether any new framework will be statutory.

Ongoing concerns

While the Met maintains that its deployments are intelligence-led and focus exclusively on locating individuals wanted for serious crimes, senior officers previously admitted to a Lords committee in December 2023 that the force selects images for its watchlist based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual. This includes those wanted for non-serious crimes such as shoplifting or traffic offences.

Academic Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School, challenged the proportionality and necessity of this approach during the same Lords session, arguing that the coercive power of the state means police must be able to justify each entry to the watchlists based on the specific circumstances involved, rather than their blanket inclusion via “crime types”.

Critics have also raised concerns about the Met’s disproportionate use of LFR, in terms of watchlist sizes, faces scanned, and impacts on certain communities. Civil liberties group Big Brother Watch, for example, has repeatedly highlighted how the size of the Met’s LFR watchlist – which now routinely exceeds 15,000 faces – indicates the deployments are not intelligence-led or targeted. Commenting in the wake of a February 2022 LFR deployment in Westminster, where the watchlist contained 9,756 images, Big Brother Watch director Silkie Carlo told Computer Weekly: “That’s not a targeted and specified deployment because of a pressing need – it’s a catch net.”

According to data gathered by Green Party London Assembly member Zoë Garbett, over half of the Met’s 180 LFR deployments during 2024 were in areas where the proportion of Black residents is higher than the city’s average, including Lewisham and Haringey. While Black people comprise 13.5% of London’s total population, the proportion is much higher in the Met’s deployment areas, with Black people making up 36% of the Haringey population, 34% of the Lewisham population, and 40.1% of the Croydon population, where the Met is also planning to deploy permanent LFR cameras. Garbett added that while nearly two million people in total had their faces scanned across the Met’s 2024 deployments, only 804 arrests were made – a rate of just 0.04%.

The Met said in July this year that since the start of 2024, more than 1,000 arrests have been made using LFR, 773 of which led to the individual being charged or cautioned.

Similarly, while the Met claims its use of the technology is supported by the majority of the public, there have been instances where it has deployed LFR despite public opposition. In December 2024, for example, Computer Weekly revealed that, contrary to the force’s claim that its LFR deployments in Lewisham are supported by the majority of residents and local councillors, there was minimal direct consultation with residents, while councillors clearly continued to express concerns about it.

“What people support is safer streets and improved equity and community cohesion,” Green Lewisham councillor Hau-Yu Tam told Computer Weekly at the time. “They don’t necessarily support live facial recognition, which they’re not given the full rundown of, or they’re given very misleading information about.”

In January 2023, Newham Council also unanimously passed a motion to suspend the use of LFR throughout the borough until biometric and anti-discrimination safeguards are in place.


Securing agentic identities focus of Palo Alto’s CyberArk buy

Palo Alto Networks is entering the identity security space with a multibillion-dollar acquisition, and plans to address growing concerns around protecting identities associated with AI agents

By Alex Scroxton, Security Editor
Published: 01 Aug 2025 16:30

Palo Alto Networks has placed securing agentic artificial intelligence (AI) front and centre as it lines up a $25bn (£18.8bn) acquisition of identity security specialist CyberArk, marking its “formal entry” into the identity security space as it extends a multi-platform strategy.

Palo Alto said combining CyberArk’s identity and privileged access management (PAM) expertise with its AI-backed cyber platform would extend identity protection across the board, protecting not merely humans and machines, but also autonomous AI agents.

“Our market entry strategy has always been to enter categories at their inflection point, and we believe that moment for identity security is now,” said Palo Alto’s chairman and CEO Nikesh Arora. “This strategy has guided our evolution from a next-gen firewall company into a multi-platform cyber security leader,” he said. “Today, the rise of AI and the explosion of machine identities have made it clear that the future of security must be built on the vision that every identity requires the right level of privilege controls, not the ‘IAM fallacy’.

“CyberArk is the definitive leader in identity security with durable, foundational technology that is essential for securing the AI era,” said Arora.

CyberArk founder and executive chairman Udi Mokady hailed a “profound moment” in the firm’s journey. “From the beginning, we set out to protect the world’s most critical assets, with a relentless focus on innovation, trust and security,” he said. “Joining forces with Palo Alto Networks is a powerful next chapter, built on shared values and a deep commitment to solving the toughest identity challenges. Together, we’ll bring unmatched expertise across human and machine identities, privileged access and AI-driven innovation to secure what’s next.”

Convergence

The acquisition rests on the premise that accounting for the widening spread of identities – meaning AI agents and workloads – will become an increasingly critical challenge for security teams in the near future. The two firms believe marrying their offerings will enhance and accelerate a new kind of combined cyber platform – a single solution to eliminate security gaps and simplify operations. They also hope to disrupt the legacy identity and access management (IAM) market and, as noted, address the gathering security concerns around agentic AI.

“This machine identity move is significant, as it both ties into the agentic AI trend that Palo Alto is embracing and driving, and there are an order of magnitude more machine identities than human identities,” Gartner vice-president analyst Charlie Winckless told Computer Weekly.

“[The acquisition] also supports Palo’s efforts to grow their security platform and aligns with their message, especially if they tie machine identity to agentic AI systems that will require delegated identities, rather than just inheriting the permissions and identity of the human initiator,” he said.

‘Different animal’

Palo Alto Networks has long been an acquisitive company – more so since Arora took the reins in 2018 – but typically its buying habits have focused on startups that enhance existing lines of business or fill gaps in its platforms, said Winckless. The CyberArk acquisition stands out because it represents a large move into a different market, and opens up considerable horizontal growth to accelerate Palo Alto’s earnings. “CyberArk is a different animal, and comes with a different price tag and different expectations,” said Winckless.


The blind spot: digital supply chain is now a board-level imperative

Many companies lack visibility into complex digital supply chains, meaning hidden risks and regulatory exposure. Cyber security requires continuous mapping and board engagement.

Many organisations still lack visibility into their digital supply chains, leaving serious vulnerabilities despite rising incidents and new regulations such as NIS2, the SEC’s disclosure rules and DORA. Most companies will know who they’ve signed contracts with. But ask for a full list of every software dependency, API integration, cloud platform or open-source library that handles sensitive data, and you’re met with silence. That silence is dangerous, and points to a lack of due diligence and cyber hygiene control.

Today’s supply chain is no longer a linear string of vendors; it’s a sprawling and complex ecosystem of services, platforms and hidden interdependencies. When one of those links breaks, the damage doesn’t stop at the firewall. Just ask those caught in the fallout from SolarWinds, MOVEit, Log4j or even the CrowdStrike misconfiguration outage. In each case, a single compromise or misconfiguration rippled outward, impacting thousands of downstream businesses that didn’t have proper visibility of their supply chain vulnerabilities. Trusting your suppliers is one thing; knowing your risk exposure, potential impact and resilience is entirely different.

Despite the high-profile nature of these security or configuration incidents, many boards still underestimate supply chain cyber risk – or worse, assume it’s already under control or can be managed by contractual SLAs alone. It’s not a blame game, though. This isn’t about complacency; it’s a blind spot, and it exists because traditional risk models weren’t designed for modern complexities. Most organisations still treat third-party security as a procurement checkbox or annual audit exercise rather than what it truly is: a live, dynamic attack surface.

What is complacent is thinking this is a small technical challenge that CISOs can quietly fix. On the contrary, it is a strategic threat to business continuity, customer trust and regulatory compliance – but when managed well, it can become a business differentiator.

The supplier ecosystem is much bigger than you think

In cyber security, the term “supplier” has outgrown the contract. It now includes the SaaS platforms you rely on, the cloud infrastructure running behind the scenes, the open-source code embedded in your software, and the fourth-party vendors supporting your third-party vendors. It’s a digital chain of custody, and every link in that chain is a potential exposure point.

The problem is that few organisations have a true understanding of their supplier ecosystem or have fully mapped the supply chain. They see the tip of the iceberg – the signed agreements and the due diligence spreadsheets – but not the dependencies lurking just beneath the surface. This is often where traditional third-party risk programmes fall short. They focus on procurement, not proximity. Risk is usually measured in terms of who you buy from and the value of the transactions, instead of who has access to systems, data or customer information.

And yet it’s these hidden interdependencies that attackers exploit. A compromised API in a marketing tool; a vulnerability in a widely used open-source library; a cloud provider misconfiguration that leaves customer data exposed. These are recurring headlines. If you can’t see the full digital blast radius of your ecosystem, you can’t secure it. And if you can’t explain that risk in business terms, you won’t get the support needed to manage it.

What the boardroom still doesn’t see

For most boards, third-party risk is seen as the CISO’s responsibility rather than a company-wide concern. That’s not because they don’t care; it’s because no one has translated the technical complexity into impacts or consequences they can relate to. Boards don’t need a list of vendors or a rundown of which open-source components are used in which systems. They need to know what happens if one of them fails. What’s the potential fallout and impact? How many customers are likely to be affected? What will the cost be in terms of downtime, trust or compliance exposure? Until those answers are clear, ecosystem risk remains abstract – and, to be fair to boards, “abstract” is hard to prioritise.

So security teams hit a wall. They’ve done the technical mapping, flagged the concerns and run the assessments, but the message still doesn’t land. Why? Because it’s wrapped in language that hasn’t changed since it left the IT department. To make supply chain risk resonate at board level, it needs a story: a “what if” scenario grounded in the business’s actual operations. What if that small vendor supporting your invoicing system gets breached? What if the cloud provider running your analytics pipeline has an outage? What if the code library your product depends on gets hit with a zero-day? These are the conversations that move supply chain security out of the “nice to have” column and into the budget column.

Regulation without borders

Third-party risk is now a matter of governance. Under frameworks such as NIS2 and DORA, organisations are being held directly accountable for the cyber security posture of their digital supply chain. That includes suppliers, service providers and, in some cases, fourth parties. It’s not enough to run an annual assessment and file it away. These regulations demand continuous oversight, demonstrable due diligence and, crucially, the ability to communicate risk exposure in a timely, transparent way. The financial penalties for non-compliance are hefty – up to €10m or 2% of annual turnover, whichever is higher – and the reputational cost is also high.

But here’s where things get a little tricky: the regulatory landscape isn’t uniform. Global organisations must navigate a patchwork of obligations, from the SEC’s cyber disclosure rules in the US to GDPR enforcement in the EU and region-specific mandates in Asia-Pacific. One spreadsheet for each region, or one audit per year, isn’t going to cut it. The smart move is to build a unified risk posture that aligns with the spirit of these regulations, not just the letter. Start with impact: which suppliers could disrupt your business if compromised? Which dependencies expose customer data?
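The “digital blast radius” idea above can be made concrete: model the supplier ecosystem as a directed graph (supplier → systems and sub-suppliers it touches) and walk it to find everything a single compromise could reach, fourth parties included. A minimal sketch in Python; the supplier names and edges below are invented for illustration:

```python
from collections import deque

def blast_radius(edges, start):
    """Return every node reachable from `start` via breadth-first search.

    `edges` maps each supplier/system to the things it connects to, so the
    result is the set of assets a compromise of `start` could touch.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in edges.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen - {start}

# Hypothetical ecosystem: a marketing SaaS uses an analytics vendor,
# which in turn stores data in a cloud bucket holding customer PII.
ecosystem = {
    "marketing_saas": ["analytics_vendor", "crm"],
    "analytics_vendor": ["cloud_bucket"],   # fourth party
    "crm": ["customer_db"],
    "cloud_bucket": [],
    "customer_db": [],
}

print(sorted(blast_radius(ecosystem, "marketing_saas")))
# → ['analytics_vendor', 'cloud_bucket', 'crm', 'customer_db']
```

A real programme would feed this graph from SBOMs, cloud inventories and contract data rather than a hand-written dictionary, but the traversal, not the data entry, is what turns a vendor list into a risk picture.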


Ministry of Justice unveils strategy for safe, secure AI

A new chief AI officer, together with a framework co-developed with the Alan Turing Institute, form part of a three-year AI action plan at the MoJ

By Cliff Saran, Managing Editor
Published: 01 Aug 2025 11:45

The Ministry of Justice (MoJ) has hired a chief AI officer as part of a three-year action plan to deploy artificial intelligence (AI) across the justice system. The action plan includes setting up the Justice AI Unit, an interdisciplinary team comprising experts in AI, ethics, policy, design, operations and change management. The plan also covers ensuring a secure supply chain for AI software and a set of targeted data initiatives to improve quality, governance, interoperability and infrastructure.

Among the key requirements is having a single, consistent ID for each offender, which the MoJ regards as critical to making better-informed decisions across the justice journey. As an example of the work done so far, the MoJ said it is building a real-time system linking offender data across different agencies. This is based on Splink, an open-source data linking tool developed by MoJ data scientists. Splink applies explainable machine learning to deduplicate records and ensure accuracy. According to the MoJ, this single view will reduce the admin burden, support better decision-making, and enable more advanced AI tools to enhance public safety and rehabilitation outcomes.

“We will embed AI solutions securely and collaboratively across our department and agencies, ensuring we maximise AI’s potential while maintaining public trust and transparency,” said James Timpson, who leads the MoJ’s AI initiative.

The MoJ has also set up an AI Steering Group that brings together senior leaders from across the department – including policy, data, digital, security, people, legal, HM Prison & Probation Service (HMPPS), HM Courts & Tribunals Service (HMCTS), risk and communications – to oversee AI initiatives and manage risks. The steering group covers ethical, security and operational standards, and also manages the departmental AI risk register. In the AI action plan for justice policy document, Timpson said the steering group would provide a regular forum for reviewing progress and resolving issues.

A new AI hub, ai.justice.gov.uk, is being established to serve as a central point of engagement. It will be used to provide regular updates on what the MoJ is looking at in terms of AI, covering pilots and scaling. “Where appropriate, we will open source our tools and solutions to promote reuse and collaboration,” the department said.

Working with the Alan Turing Institute, the MoJ has developed a publicly accessible AI and Data Science Ethics Framework which, according to the policy paper, offers “a practical toolkit to guide developers, policymakers and decision-makers from inception through to deployment”. The framework is built around five core principles: sustainability, accountability, fairness, explainability and data responsibility. “These principles underpin our broader AI adoption approach, and we will now scale up the use of this framework, ensuring it is consistently applied by all internal teams working with AI,” the policy paper states.

The MoJ said it places a high priority on privacy and security, and is working with its data protection and cyber security specialists. It added that it would continue to meet and exceed legal and regulatory standards, including compliance with the General Data Protection Regulation, government security requirements, regular privacy audits, robust access controls and staff training. Recognising that AI models must be monitored, re-trained and improved over time, the MoJ said it is working with HM Treasury and the Department for Science, Innovation and Technology on sustainable funding models that support the running costs of digital services over several years.
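Splink’s actual API is far richer than this, but the record-linkage idea it implements – score candidate record pairs on agreeing fields, then cluster matches into a single identity – can be sketched in plain Python. Everything below (field weights, threshold, sample records) is illustrative, not the MoJ’s configuration:

```python
def pair_score(a, b, weights):
    """Sum the weights of the fields on which two records agree (ignoring blanks)."""
    return sum(w for field, w in weights.items()
               if a.get(field) and a.get(field) == b.get(field))

def dedupe(records, weights, threshold):
    """Greedy clustering: each record joins the cluster of the first earlier
    record it matches, otherwise it starts a new cluster. This is a crude
    stand-in for Splink's probabilistic Fellegi-Sunter model."""
    assignment = {}
    next_cluster = 0
    for i, rec in enumerate(records):
        for j in range(i):
            if pair_score(rec, records[j], weights) >= threshold:
                assignment[i] = assignment[j]
                break
        else:
            assignment[i] = next_cluster
            next_cluster += 1
    return assignment

# Illustrative records: 0 and 1 are the same person under a name variant.
records = [
    {"name": "J. Smith", "dob": "1990-01-02", "postcode": "SW1A 1AA"},
    {"name": "John Smith", "dob": "1990-01-02", "postcode": "SW1A 1AA"},
    {"name": "A. Jones", "dob": "1985-07-14", "postcode": "LS1 4AP"},
]
weights = {"name": 2, "dob": 3, "postcode": 2}

print(dedupe(records, weights, threshold=5))
# records 0 and 1 agree on dob + postcode (score 5), so they share a cluster
```

At the MoJ’s scale the interesting engineering is in blocking (avoiding the quadratic pairwise comparison) and in learning the weights from data, which is where the real library earns its keep.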


AI-enabled security pushes down breach costs for UK organisations

Organisations that are incorporating AI and automation into their cyber security practice are seeing improved outcomes when incidents occur, according to an IBM study

By Alex Scroxton, Security Editor
Published: 30 Jul 2025 15:52

British organisations that have incorporated artificial intelligence (AI)-enabled solutions into their cyber security stack appear to be reaping the rewards of automation – from a cost perspective, at least – as data breach costs drop by hundreds of thousands of pounds.

This is according to the UK-specific cut of IBM’s latest annual Cost of a data breach report, released this week, which found that even though less than one-third of UK organisations have deployed AI-enhanced security, overall average data breach costs for those that have came in at £3.11m per annum, compared with £3.78m for those that had not.

The 2025 report, compiled on IBM’s behalf by the Ponemon Institute, surveyed more than 600 organisations and interviewed around 3,500 people worldwide who had experienced a breach between March 2024 and March 2025. Approximately 8% of respondents are UK-based.

Elaine Hanley, partner at IBM cyber security services for the UK and Ireland, described AI as a massive benefit to defenders: “Organisations that are using AI-based threat detection and threat response are massively more effective than organisations that aren’t. But the negative side is that attackers are using AI. It’s a race where you’ve got threat actors using AI and being much more effective with it, then you’ve got the defenders at the organisation using AI to spot that faster.”

The IBM survey found that UK organisations making use of security AI and automation are able to identify and contain cyber attacks much more quickly. Its data reveal that the mean time to identify (MTTI) a breach at an AI-powered organisation was 148 days, and the mean time to contain (MTTC) was 42 days, down from 168 and 64 days respectively at organisations relying on traditional methods.

Running to catch up

The benefits of AI-powered security may be evident, but IBM also found that UK organisations are struggling to keep up when it comes to implementing AI-specific security policies. For example, 63% of UK-based respondents said they did not have AI access controls in place to reduce the risks associated with potential cyber attacks against AI models or applications. Only 31% of UK-based respondents had governance policies in place to properly manage wider unsanctioned use of so-called shadow AI by their staff.

“IBM’s report shows a clear trend that AI technologies continue to be a great tool, not just for productivity but also for security purposes,” said Matthew Evans, chief operating officer and director for markets at TechUK. “However, AI alone is not the answer – as data breaches become faster and smarter, people and organisations need the proper tools and skills to use AI in the right way to protect themselves. Lifelong learning in the form of courses, training and certifications can make the difference in supporting organisations and their employees in protecting themselves from costly data breaches,” he said.

DevSecOps, SIEM as important as AI

But this is not to say that AI is the only significant investment that defenders need to be making. The report also outlined that organisations paying proper attention to best practice around DevSecOps saw similar impacts on their breach costs, while spending on security analytics and security information and event management (SIEM) also had an effect, although a slightly less valuable one.

Breach costs were pushed up at organisations experiencing large-scale use of shadow AI technology. Those with more complexity in their overall security stack, and those failing to properly account for risks arising through their supply chains, also saw increased costs. Among surveyed UK organisations, third-party supplier and supply chain compromises were the most commonly identified breach causes, ahead of phishing and credential theft.

“It’s not just about how good your security is,” said Hanley. “You need to look at third-party risk management and look at all the people that you’re interacting with digitally, and make sure that they care as much as you do about security.”

Worldwide findings

More widely, the IBM report found that global average costs are falling in line with the UK, down to $4.44m (£3.32m) on average, the first decline since 2020. There were other encouraging trends in the data. For example, more organisations are now feeling empowered to push back against ransomware demands, with 63% opting not to pay, compared with 59% last year. However, perhaps more worryingly, the IBM data also reveal that post-breach investment plans seem to be stalling, with only 49% of breached respondents saying they planned to spend more on cyber security, compared with 63% last year.
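The headline UK figures in the report reduce to simple deltas. A back-of-the-envelope check using only the numbers quoted above:

```python
# Figures quoted in IBM's UK cut of the 2025 Cost of a data breach report.
cost_with_ai, cost_without_ai = 3.11, 3.78           # average breach cost, £m
mtti_ai, mtti_traditional = 148, 168                 # mean time to identify, days
mttc_ai, mttc_traditional = 42, 64                   # mean time to contain, days

cost_saving = round(cost_without_ai - cost_with_ai, 2)       # £0.67m per breach
lifecycle_ai = mtti_ai + mttc_ai                             # 190 days
lifecycle_traditional = mtti_traditional + mttc_traditional  # 232 days

print(f"AI-enabled security: £{cost_saving}m cheaper per breach, "
      f"{lifecycle_traditional - lifecycle_ai} days shorter breach lifecycle")
# → AI-enabled security: £0.67m cheaper per breach, 42 days shorter breach lifecycle
```

The 42-day difference in total breach lifecycle is plausibly the mechanism behind the cost gap, since the report’s long-running finding is that faster containment correlates with lower cost.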


Cerebras Code

We are launching two new plans designed to make AI coding faster and more accessible: Cerebras Code Pro ($50/month) and Cerebras Code Max ($200/month). Both plans give you access to Qwen3-Coder, the world’s leading open-weight coding model – running at speeds of up to 2,000 tokens per second, with a 131k-token context window, no proprietary IDE lock-in, and no weekly limits!

Cerebras Makes Code Generation Instant

Even with the best frontier models, you still end up waiting around for completions. And as coding workflows get more agentic, the latency adds up fast. You’re not just waiting once – you have to wait on every LLM call across multi-step edits, tool use, retries and planning. At 2,000 tokens per second, code generation becomes instant. And starting at $50/month, anyone can use Cerebras Code and enjoy fast code generation that keeps you in flow.

Powered by a Frontier Model

Qwen3-Coder is Alibaba’s flagship coding agent model. The 480B-parameter model delivers performance comparable to Claude Sonnet 4 and GPT-4.1 in coding and agentic tasks, achieving leading performance on coding benchmarks such as Agentic Coding, Agentic Browser-Use and BFCL.

Bring your own AI IDE

If your code editor or tool supports OpenAI-compatible inference endpoints, you can use it with Cerebras Code. Plug Cerebras Code into anything – Cursor, Continue.dev, Cline, RooCode, or whatever else you’re using. No extra setup. Just instant, high-quality code generation inside your own workflow.

Available now

Cerebras Code Pro ($50/month): Qwen3-Coder access with fast, high-context completions. Send up to 1,000 messages per day – enough for 3–4 hours of uninterrupted vibe coding. Ideal for indie devs, simple agentic workflows and weekend projects.

Cerebras Code Max ($200/month): Qwen3-Coder access for heavy coding workflows. Send up to 5,000 messages per day. Ideal for full-time development, IDE integrations, code refactoring and multi-agent systems.

Cerebras Code Pro and Code Max are available today, no waitlist. Sign up, bring your key to your favorite editor, and start building instantly.
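“OpenAI-compatible” means any client that can point its base URL at another server will work unchanged. A minimal sketch of the request body such an endpoint expects; the base URL and model identifier below are assumptions for illustration, so check the provider’s documentation for the real values:

```python
import json

# Assumed values for illustration only -- consult the provider's docs.
API_BASE = "https://api.cerebras.ai/v1"   # hypothetical OpenAI-compatible base URL
MODEL = "qwen-3-coder"                    # hypothetical model identifier

def build_chat_request(prompt, model=MODEL, max_tokens=512):
    """Build the JSON body for a POST to {API_BASE}/chat/completions.

    Any OpenAI-compatible endpoint accepts this shape, which is why editors
    such as Cursor or Cline can be pointed at it with no extra setup beyond
    a base URL and an API key.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("Write a function that reverses a linked list.")
print(json.dumps(body, indent=2))
```

In practice you would send this with any HTTP client (or the `openai` SDK with `base_url` overridden) and an `Authorization: Bearer <key>` header; the payload shape is the compatibility contract.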


Coffeematic PC – A coffee maker computer that pumps hot coffee to the CPU

Sometime during winter 2024, I found myself at a thrift store. I was staring at rows of appliances, wrapped in plastic and clinging to life, trying to answer one question: which of these is the right chassis for a retro gaming computer? Driving home, I took corners carefully, checking that the General Electric (GE) drip coffee maker I’d chosen was safe in the backseat.

The coffee maker’s given name was Coffeematic. Circa 1980, it is boxy yet athletic – unfazed by any considerations of future internet connectivity. Best of all, it is perfect for being hacked. Coffeematic is now Coffeematic PC – part gaming computer, part coffee maker. A newly synthesized machine percolating processes well beyond its original configuration. Coffeematic PC is part of a lineage of coffee maker computers made since 2002. I’ll describe that fascinating lineage here, and how it inspired an art exhibition called Sparklines, where hand-drafted data visualizations accompany Coffeematic PC.

Profound and poetically articulated. Elegant and assertive. Highly scalable with dynamic acceleration. No. These do not describe Coffeematic PC or its peers (one of those phrases describes a bottle of wine). A custom-built computer can be basic and functional, or an elaborate, absurd, spinning piece of art. Coffeematic PC falls somewhere on that spectrum while also being nearly self-destructive.

This is how Coffeematic PC works. The computer is fully functional. The coffee maker is too; it percolates Java like a regular coffee maker. Very hot Java. Computers usually use fans or liquid cooling systems to reduce heat. Coffeematic PC uses the hot Java it brews to heat? cool? caffeinate? the computer. A pump takes the hot, caffeinated slurry (~90C/194F) and circulates it through two radiators sitting on top of Coffeematic PC’s crown, then down to a central processing unit (CPU) tucked within an ASUS M2NPV-VM motherboard snugly strapped to Coffeematic PC’s back. Java continues through an artery returning to Coffeematic PC’s carafe. The process repeats until Java is integrated with the user or the machine is powered off.

↑ Coffeematic PC has a dedicated pump to aggressively dispense Java for the user.

CPUs are meant to be cool and Java hot. Despite circulating hot Java, Coffeematic PC does not crash. To understand more, I wrote command line code to gather data on Coffeematic PC every 5 seconds, and monitored Coffeematic PC for 75 minutes. The graph below shows the results. The machine is just barely non-destructive. Coffeematic PC’s CPU, body, and circulatory system eventually find equilibrium: a warm 33C/91F – amazingly close to the temperature of the slurry that flows through you and me.

An important part of this project is the lineage of coffee maker computers. Before discussing that, this is how Coffeematic PC was made. The build is a mix of discarded electronics and newly purchased hardware, pumps, and radiators. The motherboard, CPU, RAM, and graphics card are from the mid-2000s and were sourced from a recycling center.

This is the parts list for Coffeematic PC:

GE Coffeematic Coffee Maker 10 Cup
ASUS M2NPV-VM AM2 Motherboard
AMD Athlon II X4 640 3 GHz Quad-Core OEM/Tray Processor
Hynix 1GB 2Rx8 PC2-5300U-555-12 PC2-DDR2 RAM
Acer SA100 240 GB 2.5″ Solid State Drive
HIS H467QR1GH Radeon HD 4670 1 GB Video Card
Antec Earthwatts Green 430 W 80+ Bronze Certified ATX Power Supply
Linux Mint Operating System
CPU Water Cooling Block for Intel
Water Cooling Computer Radiator
12V Mini Food Grade Self Priming Diaphragm Fresh Water Transfer Pump
Waterproof Toggle Switch 12V
Brass Hose Barb 3/8″ to 3/16″
Brass Hose Barb 5/16″ to 3/16″
90 Degree Elbow Hose Barb 3/16″
90 Degree Elbow Hose Barb 3/8″ 10mm
90 Degree Elbow Hose Barb 5/16″ 8mm
Food Grade Silicone Tubing 3/16″ ID x 5/16″ OD
Food Grade Vinyl Tubing 5/16″ ID – 7/16″ OD

I spent about a month designing and building Coffeematic PC with help from my beautiful fiancée.
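The command line logging mentioned above can be sketched as a small Python loop that samples a temperature source at a fixed interval and appends timestamped CSV rows. The sysfs path is an assumption (on Linux, CPU temperature often lives somewhere under /sys/class/thermal, but the exact zone varies by machine), so the reader function is injected:

```python
import time

def log_temperatures(read_temp_c, interval_s, samples, out_path):
    """Sample `read_temp_c()` every `interval_s` seconds, `samples` times,
    appending `elapsed_seconds,temp_c` rows to a CSV file."""
    start = time.time()
    with open(out_path, "a") as f:
        for _ in range(samples):
            f.write(f"{time.time() - start:.1f},{read_temp_c():.1f}\n")
            time.sleep(interval_s)

def read_cpu_temp_c():
    # Assumed sysfs location; the zone number and driver vary by machine.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

# 75 minutes at one sample every 5 seconds = 900 rows:
# log_temperatures(read_cpu_temp_c, 5, 900, "coffeematic_temps.csv")
```

The resulting CSV plots directly as the equilibrium curve described above, with one column for elapsed time and one for temperature.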
The build traverses time. The coffee maker is from the late 1970s, the motherboard, CPU, and graphics card from the 2000s, and the SSD, operating system, and hardware from today (the 2020s). The General Electric coffee maker needed only a minor repair: replacing a small vinyl tube that had cracked. It takes a while to brew a pot of coffee, but once it is brewed… it tastes like coffee made from a plastic coffee maker from the 1970s. I’lllll drink it!

A few clips of how Coffeematic PC was built. Watch on YouTube

The lineage of coffee maker computer builds spans 22 years with a curious 15-year gap in the middle. I’m not the first person to synthesize a coffee maker and a computer. But I think I am the first to use hot Java as a cooling method. The graph below shows the lineage of coffee maker computers. There are a total of five. In 2002, Nick Pelis built the first-ever coffee maker computer, named The Caffeine Machine. Then the builds went cold for 15 years until 2018, when a person named Ali “THE CRE8OR” Abbas collaborated with a company named Zotac to make the Zotac Mekspresso to feature in a trade show. One year later, in 2019, a man whose username is Logarythm made the Mr. Coffee PC. This unassuming build is perhaps my favorite. Five years later, after COVID-19, NerdForge, a YouTube channel specializing in fun builds, built a “PC that makes coffee”. During this time I was making Coffeematic PC.

Why is there a 15-year gap between the first coffee maker computer and the rest? Were people tired of drinking coffee? I don’t think so. Were people tired of building fun computers? Were they distracted? Could they not afford it? I’m not sure. But something is wrong. There should be a steady output of absurd coffee maker computers being made. What happened in those 15 years? To look into it I created the graph above. It shows a timeline of coffee maker computers along with important events compiled from the Timeline of Computer History from the Computer History Museum.

Coffeematic PC – A coffee maker computer that pumps hot coffee to the CPU

JSON is not a YAML subset (2022)

People on the internet believe that JSON is a subset of YAML, and that it’s safe to parse JSON using a YAML parser. Following this advice will end badly, because JSON is not a subset of YAML. It is easy to construct JSON documents that (1) fail to parse as YAML, or (2) parse to valid but semantically different YAML. The second case is more dangerous because it’s difficult to detect.

False has over “1.7e3” named fjords

YAML (infamously) allows string scalars to be unquoted. A conforming YAML parser, presented with a token known to contain a scalar value, must match that token against a set of patterns and then fall back to treating it as a string. This behavior produces surprising outcomes, and has been named The Norway Problem.

```
$ irb-3.1.2
> require 'json'
=> true
> require 'yaml'
=> true
> JSON.load '{"a": 1e2}'
=> {"a"=>100.0}
> YAML.load '{"a": 1e2}'
=> {"a"=>"1e2"}
```

YAML 1.2 won’t save you

YAML 1.2 is a revision to the YAML spec that (among other goals) aims to make YAML a proper superset of JSON. To maintain backwards compatibility with existing YAML documents, the version is specified in a %YAML directive.

```
---
a: 1e2  # document["a"] == "1e2"
b: no   # document["b"] == false
```

```
%YAML 1.2
---
a: 1e2  # document["a"] == 100
b: no   # document["b"] == "no"
```

Regardless of whether YAML 1.2 has been (or will be) widely adopted, it does not help those who want to parse a JSON document with a YAML parser. JSON documents do not start with %YAML, and therefore cannot opt in to the YAML parser behavior that would permit correct parsing of JSON.
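The irb transcript above can be reproduced as a self-contained script with Ruby’s standard json and yaml libraries (Ruby’s Psych parser follows YAML 1.1 scalar resolution, which is what triggers both surprises):

```ruby
require "json"
require "yaml"

doc = '{"a": 1e2}' # valid JSON: 1e2 is the number 100.0

# JSON sees a float; YAML 1.1's float pattern needs a dot, so Psych
# falls back to treating the unquoted 1e2 as a string.
json_value = JSON.parse(doc)["a"]
yaml_value = YAML.load(doc)["a"]

# The Norway Problem itself: YAML 1.1 resolves unquoted no/yes/on/off
# as booleans, so a bare "no" does not survive as a string.
norway = YAML.load("no")
```

Here `json_value` is the Float 100.0, `yaml_value` is the String "1e2", and `norway` is false — the same document, three different readings.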


Researchers map where solar energy delivers the biggest climate payoff

Using advanced computational modeling, a Rutgers professor, in collaboration with researchers from the Harvard T.H. Chan School of Public Health and Stony Brook University, reveals both the immediate and delayed climate benefits of solar power.

Increasing solar power generation in the United States by 15% could lead to an annual reduction of 8.54 million metric tons of carbon dioxide emissions, according to researchers at Rutgers, the Harvard T.H. Chan School of Public Health and Stony Brook University. The study, published in Science Advances, found that the climate benefits of solar power differ markedly across U.S. regions, pinpointing where clean energy investments return the greatest climate dividends.

In 2023, 60% of U.S. electricity generation relied on fossil fuels, while 3.9% came from solar, according to the U.S. Energy Information Administration. Because fossil fuel-generated electricity is a leading source of carbon dioxide, or CO2, and harmful air pollutants such as fine particulate matter, expanding solar could not only mitigate CO2 but help reduce illness, hospitalizations and premature deaths linked to air pollution exposure.

Researchers examined five years of hourly electricity generation, demand and emissions data from the Energy Information Administration starting July 1, 2018. They focused on the 13 geographic regions of the United States. With this dataset, the researchers constructed a statistical model to explore how increases in hourly solar energy generation would affect CO2 emissions within a given region and in its neighboring regions. The study quantified both immediate and delayed emissions reductions resulting from added solar generation.
For example, the researchers found that in California, a 15% increase in solar power at noon was associated with a reduction of 147.18 metric tons of CO2 in the region in the first hour and 16.08 metric tons eight hours later. “It was rewarding to see how advanced computational modeling can uncover not just the immediate, but also the delayed and far-reaching spillover effects of solar energy adoption,” said the lead author Arpita Biswas, an assistant professor with the Department of Computer Science at the Rutgers School of Arts and Sciences. “From a computer science perspective, this study demonstrates the power of harnessing large-scale, high-resolution energy data to generate actionable insights. For policymakers and investors, it offers a roadmap for targeting solar investments where emissions reductions are most impactful and where solar energy infrastructure can yield the highest returns.” The researchers said their methods provide a more nuanced understanding of system-level impacts from solar expansion than previous studies, pinpointing where the benefits of increased solar energy adoption could best be realized. In some areas, such as California, Florida, the mid-Atlantic, the Midwest, Texas and the Southwest, small increases in solar were estimated to deliver large CO2 reductions, while in others, such as New England, the central U.S., and Tennessee, impacts were found to be minimal – even at much larger increases in solar generation. In addition, the researchers said their study demonstrates the significant spillover effects solar adoption has on neighboring regions, highlighting the value of coordinated clean energy efforts. For example, a 15% increase in solar capacity in California was associated with a reduction of 913 and 1,942 metric tons of CO2 emissions per day in the northwest and southwest regions, respectively. 
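As a rough illustration of the lagged-effect idea (not the study’s actual model), the per-hour associations can be treated as a distributed-lag profile whose entries are summed to get a total effect. The coefficients below are only the two California numbers quoted above; the in-between lags were not reported, so this sketch sets them to zero purely for illustration.

```ruby
# Toy distributed-lag profile of CO2 reductions (metric tons) associated
# with a 15% solar increase at noon in California. Only lag 0 (first hour)
# and lag 8 (eight hours later) are quoted in the article; all other lags
# are unknown and treated as zero here for illustration only.
LAG_PROFILE = { 0 => 147.18, 8 => 16.08 }

# Sum the profile over lags 0..max_lag, using 0.0 for unreported lags.
def total_reduction(profile, max_lag)
  (0..max_lag).sum { |lag| profile.fetch(lag, 0.0) }
end
```

Summing lags 0 through 8 gives a lower bound of roughly 163 metric tons for this one noon-hour increase, which is the sense in which delayed effects add to the immediate ones.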
“I am very excited about this study because it harnesses the power of data science to offer insights for policymakers and stakeholders in achieving CO2 reduction targets through increased solar generation,” said Francesca Dominici, director of the Harvard Data Science Initiative and Clarence James Gamble Professor of Biostatistics, Population and Data Science and a corresponding author of the study.


Today’s NYT Connections: Sports Edition Hints and Answers for Aug. 2, #313

Here are hints and the answers for the NYT Connections: Sports Edition puzzle No. 313 for Saturday, Aug. 2. Freelance writer Amanda C. Kooser covers gadgets and tech news with a twist for CNET. When not wallowing in weird gear and iPad apps for cats, she can be found tinkering with her 1956 DeSoto. Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles. If you know which team is the “New York Yankees of Japan,” you’ll have a good shot at conquering the purple category for today’s Connections: Sports Edition. Stuck? Read on for hints and the answers. Connections: Sports Edition is out of beta, making its debut on Super Bowl Sunday, Feb. 9. That’s a sign that the game has earned enough loyal players that The Athletic, the subscription-based sports journalism site owned by the Times, will continue to publish it. It doesn’t show up in the NYT Games app, but now appears in The Athletic’s own app. Or you can continue to play it for free online. Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta Hints for today’s Connections: Sports Edition groups Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group. Yellow group hint: Unsung heroes. Green group hint: This is company. Blue group hint: NCAA bigwigs. Purple group hint: Towering. Answers for today’s Connections: Sports Edition groups Yellow group: Workers at a stadium. Green group: Used to describe a 3-pointer. Blue group: Men’s college basketball coaches. Purple group: _____ Giants. Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words What are today’s Connections: Sports Edition answers? The completed NYT Connections: Sports Edition puzzle for Saturday, Aug. 2, 2025. 
NYT/Screenshot by CNET The yellow words in today’s Connections The theme is workers at a stadium. The four answers are concession staff, grounds crew, security and usher. The green words in today’s Connections The theme is used to describe a 3-pointer. The four answers are 3, beyond the arc, downtown and trey. The blue words in today’s Connections The theme is men’s college basketball coaches. The four answers are Few, Oats, Painter and Pope. The purple words in today’s Connections The theme is _____ Giants. The four answers are Little, New York, San Francisco and Yomiuri.

