ContentSproute


Japan’s Slow Crypto Approval Process Holds Back Innovation, Says WeFi CEO

Maksym Sakharov, CEO of decentralized bank WeFi, says Japan’s slow and cautious regulatory process is the main reason Web3 startups and crypto funds are moving overseas—not the high taxes. Japan requires a two-step approval from the Japan Virtual and Crypto Assets Exchange Association and the Financial Services Agency, which can take 6 to 12 months


Did Galaxy Digital Just Sell Stolen Bitcoin? CryptoQuant CEO Raises Big Questions

A quiet $9 billion Bitcoin sale has now turned into a major controversy and it could be tied to one of the earliest crypto exchange hacks on record. This week, Galaxy Digital confirmed it sold 80,000 Bitcoin (worth over $9.4 billion) through over-the-counter (OTC) deals. But CryptoQuant CEO Ki Young Ju says these coins may


Op-Ed: Australia’s AU$19B Tokenisation Gamble Needs Open Rails

Australia’s Project Acacia is a bold stress test of digital finance. Canberra has cleared fourteen institutions, among them ANZ, CBA, and Westpac, to move real money across tokenised bonds, private-market funds, and even a wholesale central-bank digital currency (CBDC). Officials tout an annual upside of AU$19 billion, but that upside hinges on a single design choice: whether the rails remain open and interoperable or become fenced inside permissioned blockchains.

Permissioned Walls Shrink the Opportunity

Closed ledgers may reassure compliance teams, but they reintroduce the gatekeeping that blockchain was designed to eliminate. Permissioned networks restrict transaction validation and smart contract deployment to pre-approved participants, reducing innovation to the speed of committee decisions. New entrants, whether start-ups or rural co-ops, must ask permission to compete. Worse, data silos fragment liquidity. A tokenised carbon credit on one consortium chain cannot natively trade against a tokenised bond on another. This forces costly bridges or off-chain workarounds that undermine the very efficiencies Project Acacia is meant to demonstrate.

Lessons from Open Ecosystems Abroad

Europe offers a live counter-example. Under the EU’s MiCA regime and sandbox exemptions, firms are issuing tokenised commercial paper and structured notes directly on public EVM-compatible chains. Compliance is enforced not at the token level alone, but across the full stack: smart contract logic, settlement infrastructure, KYC-linked registries, and regulated orchestration frameworks. These programmes rely on open, chain-agnostic standards that interoperate with token primitives like ERC-20, ERC-721, and ERC-1155, while enabling additional control logic as needed. EIP-7943, the latest proposed standard for real-world asset tokenisation, extends this model by defining a modular compliance architecture.
It cleanly separates token mechanics from proprietary infrastructure, preserving legal neutrality and enabling cross-chain interoperability. Regulation should define outcomes, not enforce architecture. Issuers must be free to adopt infrastructure that satisfies compliance requirements without being bound to a specific protocol, vendor, or permissioned system. Design-layer flexibility is essential to unlock scalable, cross-border capital formation.

Related: Australia’s Crypto Moment: Why AUD Stablecoins Matter

SMEs Need Unrestricted Rails

Australia’s digital asset roadmap risks reinforcing incumbent advantage at the moment small businesses could benefit most. Tokenisation enables farm cooperatives to fractionalise grain receivables, property developers to pre-sell equity tranches, renewable energy start-ups to securitise future cash flows, and Indigenous community trusts to unlock dormant land value. These issuers do not have the lobbying power to join a bank-led consortium chain. On permissionless infrastructure, however, smart contract tooling and wallet abstraction make capital formation nearly as accessible as launching an online storefront. Regulatory safeguards such as transfer restrictions and investor limits can still be enforced at the token or protocol layer without centralised gatekeepers.

Open Standards Instead of Bespoke Pipes

The path forward is clear: adopt a standards-first approach that treats the ledger as a modular, interchangeable component. A baseline real-world asset (RWA) token standard enables any licensed entity to build atop a shared compliance layer without fragmenting liquidity or duplicating infrastructure. Consortia may still operate private sub-networks for sensitive workflows, but final settlement and secondary trading should occur on public, auditable infrastructure accessible to all qualified participants. Open rails reduce systemic friction and ensure that access is governed by compliance, not gatekeeping.
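Token-layer safeguards of the kind described above (transfer restrictions, investor limits) are straightforward to express in code. The sketch below is a minimal, hypothetical illustration in Python rather than any real ERC implementation; all names (`AllowlistToken`, `investor_cap`) are invented for the example.

```python
# Minimal sketch of token-layer compliance: transfers succeed only when
# both parties are on a KYC allowlist and the recipient stays under a
# per-investor cap. Illustrative only; not based on any specific standard.

class AllowlistToken:
    def __init__(self, investor_cap: int):
        self.balances: dict[str, int] = {}
        self.allowlist: set[str] = set()
        self.investor_cap = investor_cap  # max units any single investor may hold

    def approve_investor(self, addr: str) -> None:
        """Compliance layer: KYC passed, this address may hold the token."""
        self.allowlist.add(addr)

    def mint(self, to: str, amount: int) -> None:
        if to not in self.allowlist:
            raise PermissionError(f"{to} has not passed KYC")
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # Both restrictions are enforced in the transfer path itself,
        # so no off-chain gatekeeper has to approve each trade.
        if sender not in self.allowlist or recipient not in self.allowlist:
            raise PermissionError("both parties must be allowlisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        if self.balances.get(recipient, 0) + amount > self.investor_cap:
            raise ValueError("transfer would exceed per-investor cap")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

In a real deployment this logic would live in a smart contract and the allowlist in a KYC-linked registry; the point is only that the compliance checks sit in the transfer path, with no centralised gatekeeper.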
Making Australia Globally Competitive

Project Acacia’s six-month pilot should therefore measure success against four criteria:

- Interoperability by default
- RegTech integrated into the asset
- SME access and on-ramps
- Open, verifiable auditability

If these conditions are met, Australia can position itself to capture compounding global liquidity rather than watch it consolidate in jurisdictions like Singapore. Critics argue that permissionless chains pose heightened risks of exploitation and illicit finance. But with smart contract audits, on-chain analytics, and zero-knowledge proofs already in production, these risks can be mitigated without sacrificing openness. The greater threat is failing to allow network effects to emerge. On open infrastructure, every new issuer or liquidity pool strengthens the system. On permissioned networks, each participant adds governance overhead and requires bespoke integration, limiting scale by design.

Related: AFR: Carnegie Opens Crypto Gate for Super Funds and Family Offices

Choose Fibre, Not Copper

We’ve seen this movie before. When the National Broadband Network leapfrogged copper for fibre, it positioned Australia for a data-hungry century. Project Acacia now faces a similar crossroads. It can either enshrine yesterday’s financial hierarchies in digital form or set the global standard for an inclusive, interoperable marketplace. The AU$19 billion prize will go to the model that scales. History shows that the model is open.

Edwin Mata, CEO & Co-Founder of Brickken

This article reflects the author’s personal commentary and should be read as opinion.


AI governance gaps: Why enterprise readiness still lags behind innovation

Opinion | Jul 25, 2025 | 6 mins | Data Governance, IT Governance, IT Governance Frameworks

Enterprises are racing to deploy AI — but without real governance, they’re flying blind and putting the whole ecosystem at risk.

As generative AI moves from experimental hype to operational reality, navigating the balance between innovation and governance is becoming a real challenge for enterprises. It’s why my company, Pacific AI, in collaboration with Gradient Flow, set out to better understand the state of AI and responsible AI with our first AI Governance Survey. And the results highlight a concerning trend: while enthusiasm for AI is high, organizational readiness is lagging.

The data highlights significant disparities in governance maturity, especially between small firms and large enterprises, and underlines the urgent need for leadership to embed governance into the foundation of AI development. But to build safer, more resilient AI systems, we need to first understand the current governance gaps and how they trickle into AI development and use.

Cautious adoption, limited maturity

Despite the media buzz and strategic urgency surrounding generative AI, only 30% of organizations surveyed have moved beyond experimentation to deploy these systems in production. Just 13% manage multiple deployments, with large enterprises being five times more likely than small firms to do so. This measured approach underscores a broader trend: most companies are in exploration mode, seeking to understand where AI can drive value before committing to widespread rollout.

But the cautious pace hasn’t eliminated risk. Nearly half (48%) of companies fail to monitor production AI systems for accuracy, drift, or misuse — basic governance practices critical to ensuring safe operations. Among small companies, the share that does monitor drops to a troubling 9%, highlighting how resource constraints and limited expertise can compound risk in less mature environments.

Speed vs. safety

The top barrier to effective AI governance isn’t regulatory uncertainty or technical complexity — it’s pressure to move fast. Nearly half (45%) of respondents cited speed-to-market demands as the primary obstacle to better governance. For technical leaders, that figure jumps to 56%, reflecting their dual role as both innovation drivers and risk managers.

This finding underscores a common business hurdle: governance is often perceived as slowing progress. In fact, robust governance structures can accelerate responsible deployment. Without frameworks for incident response, risk evaluation and model monitoring, technical teams are more likely to encounter production issues that stall deployment and damage trust.

Usage policies don’t mean governance readiness

While 75% of organizations report having AI usage policies, fewer than 60% have dedicated governance roles or incident response playbooks. These numbers reveal a policy-practice disconnect: companies may be documenting rules without operationalizing them. Among small firms, the gaps are even wider — only 36% have governance officers and 41% offer annual AI training.

This discrepancy suggests that many organizations are treating governance as a box to check, rather than a core capability. Enterprise leaders must recognize that formal policies are just the beginning. Without embedding governance into workflows, assigning clear accountability and resourcing AI oversight, the risks will outpace the controls.

There’s a leadership divide

The survey also highlights a notable divide in ambition and preparedness between technical leaders and their peers. Technical leaders are nearly twice as likely to be targeting three to five generative AI use cases in the next year. They are more likely to lead hybrid build-and-buy strategies and to oversee production deployments.
Yet they also face the highest governance pressures, report lower training rates for their teams and encounter unique blind spots — such as limited use of tools for AI incident reporting.

For enterprise CTOs, VPs, and engineering managers, the takeaway is clear: leading AI adoption requires more than technical expertise. It demands intentional governance planning, alignment with risk and compliance teams and a proactive approach to monitoring, accountability and user impact.

Small firms: The governance gap is a systemic risk

Perhaps the most concerning finding is the governance vulnerability of small firms. These organizations are significantly less likely to monitor AI systems, establish governance roles, conduct training or understand emerging regulatory frameworks. Only 14% report familiarity with major standards like the NIST AI Risk Management Framework.

In a distributed technology ecosystem, where even small startups can build and deploy powerful models, these weaknesses create systemic risk. AI failures don’t stay isolated — they can damage customers, trigger legal liabilities and prompt regulatory responses that affect the broader industry.

Enterprise leaders — especially those at larger firms — should consider collaborative approaches to uplift the governance capacity of smaller partners, vendors and affiliates. Industry-wide knowledge-sharing, tools and governance benchmarks could reduce collective exposure.

Shifting perspectives on governance

The organizations most successfully deploying generative AI are those treating governance not as a setback, but as a performance enabler. These companies integrate monitoring, risk evaluation and incident response into their engineering pipelines. They build automated checks that prevent deployment of under-tested models, treat AI failures as inevitable, and prepare accordingly. Essentially, they’re playing the long game with the safety and efficacy of their AI systems.
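An automated pre-deployment check of the kind these organizations build can be sketched in a few lines. The metric names and thresholds below are illustrative assumptions, not figures from the survey or from any specific governance framework.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned probability mass.
    A common, simple drift score: values below 0.1 are usually read as stable."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

def deployment_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Block a candidate model unless it clears accuracy, evaluation-coverage,
    and input-drift thresholds. Thresholds here are illustrative."""
    failures = []
    if metrics["accuracy"] < 0.90:
        failures.append("accuracy below 0.90")
    if metrics["test_coverage"] < 0.80:
        failures.append("evaluation coverage below 80%")
    if psi(metrics["baseline_dist"], metrics["current_dist"]) > 0.1:
        failures.append("input drift (PSI > 0.1)")
    return (len(failures) == 0, failures)
```

Wired into a CI/CD pipeline, a gate like this turns governance from a review meeting into an automated, non-negotiable step before anything ships.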
What this looks like is AI being owned by product, engineering and AI development groups — not just technical teams. By instrumenting observability into AI systems, establishing clear chains of responsibility, and training teams proactively, organizations can reduce risk and accelerate delivery the right way.

Takeaways for enterprise leaders

- Make governance a priority from the start. Elevate AI governance to a strategic priority, not an afterthought. Assign dedicated leadership, define cross-functional ownership and ensure governance goals are tied to business outcomes.
- Embed monitoring and risk evaluation in DevOps. Treat governance controls, like monitoring for model drift or prompt injection vulnerabilities, as non-negotiable parts of your AI deployment pipeline.
- Close the training and awareness gap. Expand AI literacy training across roles, especially for technical teams, and ensure familiarity with key frameworks like NIST AI RMF, ISO standards and emerging regulations.
- Prepare for failure with robust incident response. Go beyond traditional IT playbooks. Develop AI-specific response protocols that address bias, misuse, data leakage and malicious manipulation, and assign leaders to carry out these functions.
- Support the


Designing for humans: Why most enterprise adoptions of AI fail

AI won’t save your business if no one trusts it. Ditch the hype, fix the culture and stop choking innovation with red tape.

Building technology has always been a messy business. We are constantly regaled with stories of project failures, wasted money and even the disappearance of whole industries. It’s safe to say that we have some work to do as an industry. Adding AI to this mix is like pouring petrol on a smouldering flame — there is a real danger that we may burn our businesses to the ground.

At its very core, people build technology for people. Unfortunately, we allow technology fads and fashions to lead us astray. I’ve shipped AI products for more than a decade — at Workhuman and earlier in financial services. In this piece, I will take you through hard-earned lessons from my journey. I have laid out five principles to help decision-makers — some are technical, most are about humans, their fears, and how they work.

5 principles to help decision makers

The path to excellence lies in the following maturity path: Trust → Federated innovation → Concrete tasks → Implementation metrics → Build for change.

1. Trust over performance

Companies have a raft of different ways to measure success when implementing new solutions. Performance, cost and security are all factors that need to be measured. We rarely measure trust. Unfortunate, then, that a user’s trust in the system is a major factor in the success of AI programs. A superb black-box solution dies on arrival if nobody believes in the results.

I once ran an AI prediction system for US consumer finance at a world-leading bank. Our storage costs were enormous. This wasn’t helped by our credit card model, which spat out 5 TB of data every single day. To mitigate this, we found an alternative solution, which pre-processed the results using a black-box model. This solution used 95% less storage (with a cost reduction to match).
When I presented this idea to senior stakeholders in the business, they killed it instantly. Regulators wouldn’t trust a system where they couldn’t fully explain the outputs. If they couldn’t see how each calculation was performed every step of the way, they couldn’t trust the result.

One recommendation here is to draft a clear ethics policy. There needs to be an open and transparent mechanism for staff and users to submit feedback on AI results. Without this, users may feel they cannot understand how results are generated. If they don’t have a voice in changing ‘wrong’ outputs, then any transformation is unlikely to win the hearts and minds needed across the organisation.

2. Federated innovation over central control

AI has the potential to deliver innovation at previously unimaginable speeds. It lowers the cost of experiments and acts as an idea generator — a sounding board for novel approaches. It allows people to generate multiple solutions in minutes. A great way to slow down all innovation is to funnel it through some central body, committee or approval mechanism. Bureaucracy is where ideas go to die.

The Nobel laureate economist and philosopher F. A. Hayek once said, “There exist orderly structures which are the product of the action of many men but are not the result of human design.” He argued against central planning, where an individual is accountable for outcomes. Instead, he favoured “spontaneous order,” where systems emerge from individual actions with no central control. This, he argued, is where innovations such as language, the law and economic markets emerge.

The path between control and anarchy is difficult to navigate. Companies need to find a way to “hold the bird of innovation in their hand”. Hold too tight — kill the bird; hold too loose — the bird flies away. Unfortunately, many companies hold too tight. They do this by relying too heavily on a command-and-control structure — particularly groups like legal, security and procurement.
I’ve watched them crush promising AI pilots with a single, risk-averse pronouncement. For creative individuals innovating at the edges, even the prospect of having to present their idea to a committee can have a chilling effect. It’s easier to do nothing and stay away from the ‘large hand of bureaucracy’. This kills the bird — and kills the delicate spirit of innovation.

AI can supercharge innovation capabilities for every individual. For this reason, we must federate innovation across the company. We need to encourage the most senior executives to state in plain language what the appetite for risk is in the world of AI and to explain what the guardrails are. Then let teams experiment unencumbered by bureaucracy. Central functions shift from gatekeepers to stewards, enforcing only the non-negotiables. This allows us to plant seeds throughout the organisation, and harvest the best returns for the benefit of all.

3. Concrete tasks over abstract work

Early AI pioneer Herbert Simon, the father of behavioural science and a Nobel laureate and Turing Award winner, introduced the idea of bounded rationality: humans settle for “good enough” when options grow beyond a certain number. Generative AI follows this approach (possibly because, being trained on human data, it mimics human behaviour).

Generative AI is stochastic — every time we give the same input, we get a different output, a “good enough” answer. This is very different from the classical model we are used to, where given the same input, we get the same output every time. This stochastic model, where the result is unpredictable, makes modelling top-down use cases even more difficult.

In my experience, projects only clicked once we sat with the users and really understood how they worked. Early in our development of the Workhuman AI assistant, generic high-level requirements gave us odd, unpredictable behaviour.
We needed to rewrite the use cases as more detailed, low-level requirements, with a thorough understanding of the behaviour and tolerances built in. We also logged every interaction and used this to refine the model behaviour. In this world, general high-level solution
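The stochastic-versus-deterministic contrast drawn above can be sketched directly. The toy “model” below is purely illustrative; the names and candidate completions are invented for the example.

```python
import random

def classical(x: int) -> int:
    """Classical computation: the same input yields the same output, every time."""
    return x * 2

def generative(prompt: str, temperature: float = 1.0) -> str:
    """Toy generator: with temperature 0 it decodes greedily (reproducible);
    otherwise it samples, returning a different 'good enough' answer per call."""
    completions = ["answer A", "answer B", "answer C"]
    if temperature == 0.0:
        return completions[0]          # greedy decoding: deterministic
    return random.choice(completions)  # sampling: varies run to run
```

This is why low-level, detailed requirements with explicit tolerances matter: you are specifying acceptable ranges of behaviour, not a single fixed output.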


Hard lessons from a chaotic transformation

Persistence pays off after three years of digital and cultural change at a tourism giant.

Approximately 70% of all digital transformation initiatives fail to achieve their goals, often because companies chase shiny new tech while forgetting to address fundamental problems. I experienced this firsthand during a three-year project. As part of a team with several consultants and research experts, I was involved in the nearly $1 billion transformation of one of the world’s largest tourism companies. What sounds like AI and blockchain primarily involved tasks like unravelling decades-old legacy systems and dismantling silo structures. It wasn’t glamorous, but it was transformative. Together with my colleague at Deloitte Canada, I was able to initiate a paradigm shift from a traditional supply chain mindset to a supply network approach.

Big plans, big challenges

The tourism group’s extensive transformation initiative was triggered by a wake-up call from a top executive. In 2015, the manager publicly admitted the company was decades behind in terms of technology. So the parent company ordered a modernization of all subsidiaries, and management promptly allocated hundreds of millions of dollars to migrate all areas to modern platforms, from procurement to warehousing. As project managers, we quickly realized this digital dream meant one thing: chaos.

On paper, it was simple: replace legacy systems, standardize processes, and integrate data across the company. In practice, we were dealing with a 30-year-old organization that had accumulated many layers of processes. Employees simply had to make do with outdated methods. For example, the purchasing system was a patchwork of homegrown tools and spreadsheets that different teams customized to their own needs. Each department — parks, hotels, restaurants, retail — operated in its own bubble.
Projects were often implemented in isolation, with benefits accruing only to the teams that implemented them. This led to a multitude of disjointed solutions. We gradually identified these strategic mismatches between technology investments and business requirements as the project progressed.

The complexity was compounded by the size of the company, which is more like an ecosystem: 180 locations worldwide that must be supplied daily with diverse goods from company-owned and external warehouses, as well as from suppliers. So it wasn’t a simple linear supply chain but a widely branched network of internal units and partners. And virtually every part of the organization was intertwined with this network, so changes in one area could impact suppliers, warehouses, and even the customer experience. Accordingly, we communicated to management from the outset that this transformation was socio-technical in nature, and that we were changing a living organism, not a machine.

This was underscored by the project’s first major initiative: the introduction of a new source-to-pay procurement platform. Not just an IT project, it involved six different departments, each with its own processes and priorities. Bringing these into one system required resolving some long-standing conflicts. For example, finance wanted strict controls, while operations wanted more flexibility. We had plans to improve efficiency and transparency, but we faced equally significant pushback getting everyone on the same page.

From supply chain to supply network

Early on, we made a subtle but significant shift in our overall mindset. Instead of talking about a supply chain, we began thinking in terms of a supply network. This transformed our approach from abstract complexity to more effective management, and as a first step, we captured how orders, data, and decisions flow within the company.
The findings confirmed that, above all, better coordination was needed, more than fancy new algorithms. In fact, we also learned that complexity isn’t always a bad thing, but rather a reality that must be accepted. Our complex, adaptive supplier network consisted of many self-organizing parts. If we had tried to ignore these dependencies and force simplification, we would only have created new problems. One manager also warned against piling more complexity onto an already convoluted environment, urging us instead to strengthen the existing connections between all members of the organization. So we shifted our focus and formed cross-functional working groups, with representatives from all affected departments addressing each key process, to ensure everyone was pulling together rather than just pulling for themselves.

An example of network thinking in practice was the way we eliminated discrepancies between hotel operations and the rest of the business. Initially, hotels managed guest bookings and supply requirements almost independently, and were unaware of new product launches or events that could lead to a surge in demand. The lack of coordination between hotels and other departments led to some nasty surprises. For instance, on busy weekends, key offers would be out of stock because the hotel team was unaware of a promotion, a classic case of strategic misalignment. To address this, we established new communication channels and integrated planning sessions, effectively reintegrating hotels into the overarching supply network. We began treating internal departments as part of the network rather than as isolated kingdoms.

We also examined feedback loops within our system and discovered some were vicious cycles that exacerbated misalignment. We found that when stakeholders were poorly engaged in the design of a new process, for instance, the resulting solution didn’t meet their needs.
This lack of fit then led to further stakeholder disengagement, resulting in even lower participation and poorer outcomes. Research later confirmed that a pattern of weak stakeholder engagement, outdated technical skills, and structural issues led to misalignment and further project problems. We recognized this dynamic and established a rule that end users must be involved at every stage of design and implementation. We also temporarily decommissioned some core infrastructure that repeatedly caused failures, rather than build new features on a shaky foundation. Gradually, we transformed some vicious cycles into virtuous ones, where initial successes built trust and led to greater stakeholder acceptance for the next phase.

Stakeholder and system work

The most difficult part of this transformation wasn’t the technology but getting people to collaborate in new


UPS transforms air cargo operations with data, AI

Case Study | Jul 25, 2025 | 6 mins | CIO, CIO 100, Transportation and Logistics Industry

Digital asset tracking and advanced communications are helping the global shipping company leverage AI and ML at its Worldport air hub to reduce costs, improve on-time performance, enhance operational safety, and deliver a better CX.

Worldport, the worldwide air hub for UPS, has made Louisville Muhammad Ali International Airport in Louisville, Kentucky, the third-busiest cargo airport in the US. The 5.2 million-square-foot facility has more than 20,000 employees, handles 580 aircraft (290 of them large-body UPS jets), and moves about 560,000 packages per hour. “It’s a very intense operation,” says Alp Kayabasi, president of IT at UPS.

Until recently, asset tracking at Worldport involved labor-intensive, error-prone, and inefficient manual processes. Moreover, communication between load planners and ground crews relied on land mobile radio and was severely constrained: it required close physical proximity between ramp and building, Kayabasi says, meaning decentralized planners had to be nearby to make any contact. Such a setup limited UPS’s visibility, hindered efficient operations, and prevented centralized management.

Identifying the challenges impeding UPS’s operations at Worldport was one thing; resolving them was another. In June 2023, UPS started working on its solution, the Gateway Technology Automation Platform (GTAP): a way to leverage AI and machine decision-making to automate manual tasks, optimize resource utilization, and centralize exception management. Kayabasi says GTAP helped UPS identify $13.5 million in savings in 2024, and has earned UPS a 2025 CIO 100 Award in IT Excellence. “This year, we’re projecting about $24 million in cost savings out of this initiative,” he says.
“Beyond that, it improves our on-time performance, safety of our operations, and provides the best service to our customers.”

Weight of expectation

One need Kayabasi and his team identified early on was to digitize unit load devices (ULDs), the containers used to load freight and mail on wide-body and some narrow-body aircraft. “We have over 60,000 of these ULDs all over the world,” Kayabasi says. “When an aircraft lands and is being emptied or loaded, there’s a swarm of employees around it doing everything from fueling the aircraft and maintaining it, to tracking the ULDs. The aircraft has to have a distributed weight and balance protocol as well.”

All these things, Kayabasi explains, were coordinated through radio communications. “Having the devices electronically attached and our crew members efficiently communicating with each other is what we wanted to modernize,” he says. The team created smart ULDs by digitizing them and integrating advanced sensors and communication practices to provide precise global tracking and real-time location services.

But designing the sensors for the smart ULDs wasn’t easy. There was no off-the-shelf commercial solution that fit UPS’s needs. The sensors needed to provide frequent updates, and they needed to be shock resistant. Battery management for the sensors turned out to be a sticking point, too. It would’ve been prohibitive to charge the device every time a ULD came off an aircraft. Furthermore, the devices were inside aircraft or enclosed buildings the majority of the time, so solar wasn’t a viable option. At the same time, the devices had to adhere to strict FAA and FCC regulations.

UPS engineered a power management system to ensure the longevity of the batteries without the need for frequent recharging. To meet regulatory requirements, the team worked closely with the respective bodies to certify the technology.
They designed radio communication protocols that automatically adjust to a ULD’s environment to remain compliant.

Ramping up comms

In addition to the smart ULDs, UPS developed Ramp Chat as part of GTAP to eliminate Worldport’s reliance on land mobile radio. Ramp Chat is a communication platform that centralizes load planning operations, and as a mobile application with multiple carrier capabilities and backup communication protocols, it ensures a high degree of reliability.

These two aspects of GTAP have given UPS the ability to leverage AI and ML to make operations even more efficient. An algorithm determines which tugs — freight transfer vehicles — are best positioned to most efficiently load and unload ULDs, and AI helps balance weight and adjust equipment on aircraft, predictively determining where around the world additional ULD inventory is needed. “The balance and movement of those assets are now artificial intelligence-driven,” Kayabasi says. “That was something we couldn’t have done without proper tracking of these assets.”

A collective effort

Kayabasi notes that a product management mindset and lean agile practices were key to successfully delivering GTAP, giving the team the ability to learn and deliver new features very quickly. His most significant advice, though, is to always use the business, rather than the technology, as the lens to focus efforts. “How do we ensure the investment we’re making in the technology moves the business towards their objectives?” he asks. “If their objective is better communication, how do you enable that? If it’s reducing their costs, how do you enable that?”

Above all, overcommunicate with your stakeholders. “Change is hard, but by overcommunicating, creating training plans, and safety around the change you’re making, you can gain allies to help support you moving forward,” he says.

UPS transforms air cargo operations with data, AI

5 Things to Know About Global AI Regulation

Understanding the dynamic global landscape of AI regulation is no longer a concern only for specialists and policymakers. It is becoming essential knowledge for everyone, from businesspeople on the front lines to ordinary citizens who use AI services in their daily lives. This article unpacks the global currents of AI regulation clearly and in depth, through five perspectives that are especially important for grasping the complex big picture.

1. What Is the EU's Pioneering "Risk-Based Approach"?

The first place to look among the leaders of the global AI-regulation debate is the European Union. The EU has moved ahead of the world in enacting a comprehensive legal framework for AI, the AI Act, and at its core sits the "risk-based approach": a highly rational, systematic method that tiers the risks AI poses according to their severity and imposes regulation proportionate to each risk level.

Specifically, AI systems are classified into four categories. Subject to the strictest treatment are AI systems posing "unacceptable risk." These include systems judged to fundamentally threaten the EU's basic values and human rights, such as those that subliminally manipulate people's behavior to cause harm or that disadvantage people through social scoring, and their use is in principle prohibited.

Next come "high-risk" AI systems, the heart of the EU's regulation. Examples include autonomous vehicles, medical diagnosis support systems, candidate-evaluation tools used in hiring, and AI used by the judiciary or law enforcement. Because these systems can seriously affect people's lives, health, safety, and fundamental rights, they must undergo strict conformity assessments before being placed on the market. Developers must manage the quality of the data they use, prepare technical documentation, ensure appropriate human oversight, and guarantee high levels of robustness, accuracy, and cybersecurity. Violations can incur enormous fines, making this an extremely important compliance requirement for companies.

The third category is AI with "limited risk." Systems such as chatbots, where people need to know they are interacting with an AI, fall here, and providers are obliged to disclose that fact transparently to users. Finally, the large majority of remaining AI applications are classified as "minimal risk": no new legal obligations apply, and free development and use within existing law are encouraged.

This graduated, risk-proportionate approach is an ambitious attempt to reconcile two demands, protecting individual rights and promoting innovation. Thanks to its comprehensiveness and specificity, it is becoming a model for AI regulation around the world, a phenomenon known as the "Brussels effect."
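The four-tier scheme described above can be sketched as a simple lookup table. The tier names follow the AI Act, but the example systems and one-line obligations are a simplified, non-authoritative summary, not legal text:

```python
# Toy illustration of the EU AI Act's four risk tiers.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["subliminal manipulation", "social scoring"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["medical diagnosis support", "hiring assessment tools"],
        "obligation": "conformity assessment before market entry",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency disclosure to users",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "no new obligations",
    },
}

def obligation_for(tier: str) -> str:
    """Look up the regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # prints "prohibited"
```

The point of the structure is the gradient itself: obligations scale with risk rather than applying uniformly to all AI systems.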
2. The US's "Innovation-First" Stance and Sector-Specific Regulation in Practice

While the EU walks the path of comprehensive, top-down regulation, the United States has chosen a different approach. What the US prizes most is preserving innovation, the wellspring of economic growth. Wary that excessive regulation could sap technological progress and industrial competitiveness, it has remained cautious about enacting a unified, comprehensive law like the EU's.

Instead, the US has adopted a "sectoral approach" in which existing agencies and regulators oversee AI within their respective jurisdictions. The use of AI in finance, for instance, falls to the Securities and Exchange Commission (SEC); in healthcare, to the Food and Drug Administration (FDA); and in transportation, to the Department of Transportation (DOT), each drawing on its domain expertise and existing legal frameworks. The advantage of this approach is that it allows more flexible, fine-grained regulation that accounts for each sector's particularities. At the same time, critics point out that it can lack government-wide consistency and create regulatory gaps and overlaps.

Against this backdrop, the compass for US AI governance is the AI Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). It provides practical guidance for companies and organizations to voluntarily manage AI risk and to design, develop, and deploy trustworthy AI.

Although it is a voluntary guideline rather than a legally binding statute, it lays out a concrete set of processes for mapping, measuring, and managing AI risk and establishing governance, and it is becoming a de facto standard for many companies. A recent executive order requires federal agencies to adopt the framework, further raising its importance.

The US posture strongly reflects a free-market philosophy: draw out the full vitality of private enterprise and let market dynamism shape what responsible AI should look like. The contrast between the EU's strict statutory regulation and the US's flexible self-regulation will weave a relationship of both tension and cooperation in the shaping of global AI rules.

3. China's "State-Led" Regulation and AI for Social Control

In contrast to the EU's human-rights-centered and the US's innovation-centered approaches, China builds its AI regulation around an entirely different axis: state leadership. China's AI strategy is inseparable not only from accelerating economic development but also from the strongly political goals of maintaining national security and social stability. Its AI regulation is therefore top-down, fast-moving, and tightly focused on specific technology domains.

In generative AI in particular, which has drawn worldwide attention in recent years, China moved quickly, wary of the technology's influence, to enact the Interim Measures for the Management of Generative AI Services. These rules require providers of generative AI services to adhere to core socialist values, to refrain from generating content that threatens national security, and to clearly label generated content.

China has also introduced detailed rules on "algorithmic recommendation technology," which uses algorithms to recommend information and services to users, aimed at preventing opinion manipulation and the spread of addictive content. Users must be given the option to opt out of algorithmic recommendation, and providers must disclose the basic principles of their algorithms.

Underlying these rules is a clear intent to bring AI's social impact under state management and control, in contrast to the West's starting point of individual freedoms and rights. It is also notable that China's AI regulation is linked to distinctive social systems such as the social credit system, which presupposes broad collection and use of data.

A structure is taking shape in which the state holds vast amounts of citizens' data and analyzes it with AI, exerting powerful force in both industrial promotion and social control. The Chinese government has an explicit national strategy of using regulation to protect and nurture its domestic AI industry and to establish global leadership in selected fields, and it is shaping the rules strategically to that end. This state-led approach may produce astonishing results in the speed of development and the scale of deployment, but the international community has voiced serious concerns about data handling and constraints on individual freedom.
4. Japan's "Soft Law" and What "Human-Centric AI" Aims For

Amid the three major currents of Europe, the US, and China, Japan is seeking its own position. Japan's basic approach to AI regulation is not to immediately introduce strict statutory obligations, so-called "hard law," but to center on non-binding "soft law" such as guidelines and principles. Behind this lies a cautious judgment that in a field evolving as fast as AI, rigid laws would quickly become outdated and could instead stifle innovation.

The guiding philosophy of the Japanese government is "human-centric AI": the idea that AI should respect human dignity and individual autonomy and contribute to a society in which people of diverse backgrounds can pursue well-being. To realize this, the Cabinet Office's AI Strategy Council formulated the Social Principles of Human-Centric AI, and as a practical guide the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry jointly published the AI Guidelines for Business.

These guidelines encourage businesses that develop, provide, or use AI to voluntarily uphold principles such as fairness, accountability, and transparency, and to work to protect privacy and ensure security. The advantage of this soft-law approach is that the content can be revised flexibly and quickly as technology and society change. It also allows companies to take the measures best suited to their own business and risks rather than being bound by one-size-fits-all rules.

On the other hand, critics argue that relying on voluntary efforts alone cannot adequately guard against risks posed by malicious operators or those with weak safety awareness. In particular, with the EU's AI Act imposing strict obligations on high-risk AI, Japan faces the challenge of earning international trust with soft law alone and ensuring that Japanese companies are not disadvantaged in global markets.

The Japanese government has therefore engaged in repeated dialogue with the EU, seeking to show that its guidelines are substantively equivalent to what the EU's AI Act demands. Looking ahead, there is discussion of moving toward "smart regulation," keeping soft law as the foundation while combining it with hard law, such as amendments to existing statutes or limited new legislation, for particularly high-risk areas. Japan now stands at an important crossroads over how to balance flexibility with effectiveness.
5. AI Across Borders and the Front Lines of International Rule-Making

As we have seen, regulatory approaches to AI differ greatly by country and region. AI technologies and the services built on them, however, cross borders instantly over the internet. Every day, AI developed in one country affects citizens of another, and data collected in one country is used for training in another.

There are limits to what any single country's regulation can do about AI's global character. If one country tightens its rules, companies may relocate to jurisdictions with looser ones, a "race to the bottom" in regulation. Conversely, if national rules are fragmented and mutually contradictory, globally operating companies face complex compliance costs and international innovation may stall.

To address these challenges, efforts to form common international rules and principles for AI are accelerating. A leading example is the Hiroshima AI Process, pursued within the G7 framework. Launched in 2023 under Japan's leadership, it aims to mitigate the risks of advanced AI systems, including generative AI, and to develop international guidelines and codes of conduct for trustworthy AI. Countries with differing approaches, including the EU, the US, and Japan, are cooperating there and have begun producing concrete results, such as agreement on an international code of conduct for developers.

The Organisation for Economic Co-operation and Development (OECD) has also led discussion of AI from early on; its 2019 OECD AI Principles, covering inclusive growth, sustainable development, human-centred values, fairness, transparency and explainability, robustness and safety, and accountability, have become the foundation of many countries' policies.

The goal of these international efforts is not to impose a single law on the whole world but to ensure that national regulations can work together, that is, to secure "interoperability." While respecting each country's legal system and culture, they seek shared basic values on AI safety and trustworthiness and a predictable environment in which companies can operate smoothly across borders. That is the vision being pursued on the front lines of international rule-making. The outcome of this global debate carries enormous weight, shaping not only the direction of AI's technological development but also the future international order.


ViralPulseAI: AI social media management

In today’s fast-paced digital landscape, maintaining an active and engaging social media presence is crucial for businesses aiming to connect with their audience and enhance brand visibility. ViralPulseAI is an AI-powered tool designed to streamline this process by generating tailored social media content based on real-time industry news and trends. This article provides an overview of ViralPulseAI, its key features, target audience, and pricing structure to assist business owners, professionals, and decision-makers in evaluating its suitability for their needs.

Key Features

- AI-Powered Content Generation: ViralPulseAI analyzes global news sources to identify trends specific to your industry, transforming this information into platform-optimized social media posts for Facebook, LinkedIn, Instagram, and Twitter.
- Automated Scheduling and Optimization: The tool automatically crafts and schedules posts at optimal times for maximum visibility, taking the guesswork out of social media management.
- Multifaceted Content Creation: Beyond text posts, ViralPulseAI generates captions, images, and videos based on trending topics, allowing users to diversify their content without additional effort.
- Customizable Posts: Users have full control to review, tweak, and schedule posts, ensuring the content aligns with their brand’s voice and messaging.

Who Is It For?

ViralPulseAI is tailored for:

- Small Business Owners: Those with limited time for marketing can use the tool to keep their social media feeds active and professional-looking.
- Entrepreneurs and Freelancers: Individuals seeking fresh content ideas and consistent posting schedules to boost visibility and engagement.
- Busy Professionals: Professionals aiming to build trust and stay top-of-mind with their audience without spending hours on content creation.
- Agencies Managing Multiple Client Accounts: Agencies can efficiently launch or revive dormant social accounts with daily content, offering an affordable alternative to hiring a social media agency.

Pricing

ViralPulseAI offers several pricing plans to cater to different needs:

- Simple Plan: $24 per month or $288 annually; includes 40 AI-generated social media posts (10 each for Facebook, Instagram, X (Twitter), and LinkedIn), with AI-generated images for Facebook and Instagram. (saasworthy.com)
- Basic Plan: $28 per month or $336 annually; offers 80 AI-generated posts (20 each for the same platforms) with AI-generated images.
- Pro Plan: $60 per month or $720 annually; includes 20 reels, 12 carousels (3 slides each), 16 posts, 1 character for reels, 4 scenes for reels, and 1 carousel design.
- Plus Plan: $83.25 per month or $999 annually; provides 120 AI-generated posts (30 each for the platforms), with AI-generated images.

A 7-day free trial is available, allowing users to experience the service before committing to a subscription. A one-time setup fee of $15 is required to activate the trial.

Final Thoughts

ViralPulseAI offers a comprehensive solution for businesses seeking to enhance their social media presence through AI-driven content generation. Its features, including automated scheduling, multifaceted content creation, and customizable posts, cater to a diverse range of users, from small business owners to large agencies. The transparent pricing structure and free trial provide an opportunity to assess the tool’s effectiveness in meeting specific business needs. Visit viralpulse.ai for more.
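As a quick sanity check on the post-bundle plans listed in the pricing section, the effective cost per post works out as follows (the Pro plan mixes reels and carousels rather than a flat post count, so it is left out):

```python
# Effective monthly cost per post for ViralPulseAI's post-bundle plans,
# using the prices and post counts quoted in the article.
plans = {
    "Simple": {"monthly_usd": 24.00, "posts": 40},
    "Basic": {"monthly_usd": 28.00, "posts": 80},
    "Plus": {"monthly_usd": 83.25, "posts": 120},
}

for name, p in plans.items():
    per_post = p["monthly_usd"] / p["posts"]
    print(f"{name}: ${per_post:.2f} per post")
# Simple: $0.60 per post
# Basic: $0.35 per post
# Plus: $0.69 per post
```

By this measure the Basic plan is the cheapest per post, while the Plus plan trades a higher unit price for three times Simple's volume.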

