ContentSproute


Sony’s 5-star wireless noise-canceling headphones are 43% off today

Image: Sony Life is noisy and hectic, and sometimes you just want to focus on whatever it is you’re listening to, whether it be your favorite artist’s newest album, your favorite podcast, or your favorite audiobook. Well, if you’re in need of a new set of noise-canceling headphones, the fantastic Sony WH-1000XM4 is on sale for $198 on Amazon, a massive 43% discount from its original $350 MSRP. These are easily some of the best wireless cans with active noise cancellation. A few years back, our friends at Tech Advisor reviewed the Sony WH-1000XM4 and gave it a perfect 5-star score and their Editors’ Choice award for its noise cancellation performance, audio performance, and light, comfortable design. Also worth noting is that these headphones deliver a fantastic 30-hour battery life, plus a quick 10-minute charge gives another 5 hours of playback. The headphones feature touch sensor controls, enabling users to pause, play, or skip tracks, control volume, or activate a voice assistant. The controls have a slight learning curve, but it’s nothing so bad that you’ll need more than a day to get used to them. These headphones are flawless and absolutely worth it for over $150 off on Amazon. They’re an instant buy for anyone who wants fantastic noise canceling, great battery life, and an overall 5-star experience. Save 43% on one of the best wireless noise-canceling headphones Author: Gabriela Vatu, Deals Editor, PCWorld Gabriela has focused on tech writing for 12 years, covering news, reviews, buying guides, deals, and more. She has bylines in numerous consumer tech publications, including PCWorld, Macworld, PCMag, IGN, MakeUseOf, XDA, Android Police, and Pocket-lint.

Sony’s 5-star wireless noise-canceling headphones are 43% off today Read More »

Best 4K monitors: HDR, gaming, budget, and best overall

Image: Matt Smith/Foundry 4K resolution is now within reach of everyday PC users, promising a massive improvement to image quality, with four times the pixels of a 1080p display. A 4K monitor is useful not only in games and movies but also when editing documents or browsing web pages. The extra pixels make text look clear and crisp. After extensive testing, I’ve determined that the recommendations listed below are the best 4K monitors available today, covering a variety of budgets and use cases. Dell Ultrasharp U3225QE – Best 4K monitor Pros Functional, professional design Thunderbolt 4, USB-C, Ethernet, and 140 watts of Power Delivery New generation of IPS Black boosts contrast ratio Excellent sharpness from 31.5-inch 4K panel 120Hz refresh rate with VRR Cons Contrast ratio is good for an IPS panel, but still behind VA and OLED panels HDR brightness is decent, but contrast remains limited Doesn’t have speakers Best Prices Today: $1029.99 Who should buy the Dell U3225QE? Anyone who wants a well-rounded 4K monitor at the center of a home office will be well served by the U3225QE. This is a sharp 31.5-inch 4K IPS Black panel with excellent color accuracy and an above-average contrast ratio for a productivity monitor. The 120Hz refresh rate enhances responsiveness, making tasks like scrolling through documents and multitasking smoother. The ergonomic stand provides height, tilt, swivel, and portrait mode adjustments, and the anti-glare coating ensures comfortable viewing in various lighting conditions. In addition to its stunning image quality, the U3225QE’s connectivity eliminates the need for external docks and adapters. It features Thunderbolt 4 / USB-C ports with 140W of Power Delivery—enough to charge most laptops—along with HDMI, DisplayPort, and a secondary DisplayPort for daisy-chaining multiple monitors. A built-in 2.5Gbps Ethernet port ensures a fast and stable wired network connection, a rare feature among monitors. It also offers six USB-A ports, a pop-out USB hub with quick-access USB-C and USB-A ports, and KVM switch functionality for seamless switching between two connected PCs. Dell U3225QE: Further considerations While this monitor lacks built-in speakers and its HDR performance is limited, the U3225QE’s extensive connectivity and high-quality display make it a standout choice for professionals who need a central hub for their workspace. Want a smaller version of this monitor? Check out the Dell Ultrasharp U2725QE. It has a similar 4K IPS Black display panel and Thunderbolt 4 / USB-C connectivity. Read our full Dell Ultrasharp U3225QE review Dell S2722QC – Best budget 4K monitor Pros Uniquely affordable USB-C monitor 4K resolution with HDR option High brightness and good color accuracy Integrated speakers Cons Low contrast ratio saps SDR vibrance Mediocre color gamut Best Prices Today: Who should buy the Dell S2722QC? This is the 4K monitor for shoppers who want quality on a budget. The monitor can often be found for less than its $380 MSRP. It stands out by blending this budget price with 4K resolution and USB-C connectivity typically reserved for more expensive models. Dell’s 27-inch IPS display delivers crisp visuals and a respectable brightness of 296 nits, suitable for most lighting conditions. In addition to an exceptionally clear, sharp 4K image, the monitor delivers color performance that’s more than adequate for everyday productivity, streaming, and light gaming. 
These perks are paired with a USB-C port with 65 watts of USB Power Delivery for charging a connected laptop or tablet, as well as two USB-A ports for connecting wired peripherals. The monitor even ships with a sturdy yet compact ergonomic stand that adjusts for height, tilt, swivel, and pivot. Dell S2722QC: Further considerations The Dell S2722QC makes a few compromises to keep the price low. The IPS panel’s contrast ratio is low, which can make the image look flat and dull when playing games or watching a movie. It’s better suited for a home office than a gaming den. To be fair, though, quality 4K gaming monitors are usually more expensive. Dell’s S2722QC offers a lot of bang for your buck. It doesn’t deliver the highest image quality in all areas, but its combination of 4K resolution, USB-C connectivity, and affordability makes it a solid choice if you want a tack-sharp home office monitor at a low price. Read our full Dell S2722QC review Asus ProArt PA279CV – Best budget 4K monitor for creatives Pros Accurate image High maximum brightness Menu settings allow calibration Has USB-C with 65 watts Power Delivery Competitive price Cons Unimpressive design Luminance uniformity could be better HDR is bright but otherwise falls short Best Prices Today: Who should buy the Asus ProArt PA279CV? The Asus ProArt PA279CV is an excellent choice for creatives who want an entry-level 4K monitor for professional use, but need to spend less than $500. The ProArt PA279CV is a 27-inch 4K monitor with excellent color accuracy, high maximum brightness, and a good contrast ratio for an IPS monitor. This monitor also throws in USB-C connectivity. It’s not a great USB-C hub, as it has only a couple of USB-A ports, but it offers 65 watts of Power Delivery for charging a connected laptop or tablet. Asus ProArt PA279CV: Further considerations While this isn’t the right monitor for gaming enthusiasts (see below), Asus throws in adaptive sync support compatible with AMD and Nvidia video cards. This prevents screen tearing and provides smooth motion in 3D games. The monitor has a maximum refresh rate of 60Hz. Read our full Asus ProArt PA279CV review MSI MPG 272URX – Best 4K gaming monitor Pros 26.5-inch 4K OLED panel looks sharp Great contrast and color performance Strong motion clarity Respectable HDR performance Lots of connectivity including USB-C Cons Design is a bit bland Gamma, color temperature slightly off-target 4K OLED panel carries a premium price Best Prices Today: Who should buy the MSI MPG 272URX QD-OLED? The MSI MPG 272URX QD-OLED should be at the top of your list if you’re looking for a high-performance gaming monitor with a 240Hz refresh rate and a cutting-edge OLED panel. This 26.5-inch display delivers stunning 4K resolution with an ultra-smooth

Best 4K monitors: HDR, gaming, budget, and best overall Read More »

Media agencies scrutinize paid search adjustments amid zero-click signal confusion

As consumers shift the way they seek out information on the web thanks to AI, marketers and media agencies are beginning to rethink the role paid search plays in their media plans. With that rethinking, they’re monitoring a small set of dashboard indicators for signs that ad performance is slipping, or that their competitors are getting ahead. It’s not always a high-resolution picture. Most marketers’ experience suggests that web traffic is likely to be down this year — but that conversion rates from web visitors that actually buy something (or take another action, such as signing up to a newsletter) are holding up. AI search visitors themselves convert at 4.4 times the rate of average organic search visitors, according to Semrush. That’s led marketers to commission agencies like Havas, Dentsu and Kepler to overhaul their organic search approach – 57% of marketers have altered their search strategies since AI Overviews launched in 2024, per a survey by agency NP Digital – while leaving the paid media portion of their search strategy untouched. That status quo won’t hold for long. Google is ratcheting up its deployment of AI Overviews, with the feature now appearing on 47% of search result pages, according to DemandSphere. Most of those represent the kinds of search queries brands don’t often bid against, broad research queries that imply a web user is situated higher in the sales funnel, but media buyers suspect they won’t be limited for long. Meanwhile, usage of ChatGPT and Perplexity for search continues to climb. ChatGPT’s active user base reached 400 million at the start of this year while Perplexity added 2 million active users to reach 22 million total between October and the first half of 2025, per Business of Apps. Perplexity is currently toying with its ad product, while ChatGPT is expected to launch its own in the near future. Dashboard lights With those silhouettes on the horizon, media buyers are keeping a close eye on a range of indicators. “I’ve never looked at referral traffic so much in my life,” said Eric Hoover, SEO director at Kepler. He noted referrals have taken on an outsized importance as a key indicator that web visitors are arriving from an AI-generated summary. The volume, prominence and tone of citations in AI summaries are worth watching, too. “The new ranking signal is the number of citations … it’s a really strong signal of good brand presence,” Hoover explained. Search practitioners are also keeping a close eye on the cost-per-click (CPC) rates of Google’s search ad inventory. CPCs increased 9% during the second quarter of the year, according to Tinuiti. The metric reflects changing supply and demand dynamics – some of which are due to AI search user behavior, and some to Google’s own tinkering with the landscape of search results pages. Because AI Overviews appear at the top of results pages, they’ve pushed paid ads that do appear alongside them further down search results pages. Marketers have prioritized spending on keywords less likely to generate an AI summary, driving up CPCs. “Everyone’s chasing the same queries now, and it’s becoming really competitive,” said Jeff Eisenfeld, director of search at Media by Mother. “We’re seeing that AI Overviews are reshaping search behavior and, in many cases, driving up CPCs as advertisers compete for fewer clicks,” agreed Brooke Hess, vp paid media at NP Digital, in an email. But AI Overviews aren’t the only culprit. 
Today’s SERP is a crowded window panel rather than a blue-link directory; visual search results and suggested-search features carry blame as well. “It’s hard to pinpoint attribution,” said Daniel Toplitt, evp, search and digital experience at IPG’s Kinesso performance marketing unit. And Amazon’s recent, abrupt withdrawal of investment in Google Shopping ads has further complicated the situation. Price fluctuations might be consequences of the e-commerce giant’s retreat, or evidence that brands at large are changing their spending in response to AI search. “The working thesis is that falling organic and paid search clicks, prompted by increased zero-click behavior, are exerting upward pressure on CPCs,” added Hess. “That said, other factors like increased advertiser competition, seasonal trends, or strategic bidding shifts could also play significant roles.” Short-term spending and long-term implications For the most part, overall spending on paid search has held steady. “If [they] see the opportunity to capture more revenue then they’ll be willing to spend more, of course. [But] I don’t think any client has come to us [for an] increase because of AI Overviews,” said Kenneth Yau, paid search managing partner at Dentsu. But a short-term playbook has emerged for those brands that have taken a hit to web traffic – often those in the e-commerce category. They’re increasing paid spend on brand and non-brand keywords as a means of “protecting” the searches that matter most for sales conversions. “We sometimes recommend increasing paid media investment in the short term to protect visibility as AI-driven search limits organic reach,” said Rachel Klein, svp of owned-and-earned media at Wpromote. But it’s a short-term pivot, a “band-aid” that might prove less effective should (well, when) AI Overviews begin appearing on ever-more search results. “Long-term success in AI search (and search in general) depends on a holistic strategy that requires a focus on building authority, not exchanging organic tactics for paid,” she warned. Brands can’t ignore the “foundational” work required to meet the zero-click challenge for long. That foundational work might well require rethinking what, from an advertiser’s perspective, search is actually for. “Zero-click search is affecting search. It’s affecting PR, it’s affecting commerce,” noted Michael Sondak, svp and head of search for Omnicom Media Group North America. As such, a response to the zero-click challenge might require joining the dots from each of those quarters – one that reflects a brand’s overarching strategic aims, not just the need to drive web users farther down the sales funnel. “I am less concerned about a CPC increase. The question that I ask my teams from a paid media perspective is:

Media agencies scrutinize paid search adjustments amid zero-click signal confusion Read More »

Target’s next CEO shares his 3 top priorities in returning to growth

Fiddelke, Target’s chief operating officer, has been with the company for more than two decades in many different departments and leadership positions. For most of 2024, he served as CFO and COO concurrently, and in May, the company said Fiddelke would lead an “enterprise acceleration office” to drive speed and agility across the company by simplifying cross-company processes and using technology and data in new ways. The company made the announcement before its second-quarter earnings call Wednesday morning. Fiddelke will take the role in February 2026, with current CEO Brian Cornell becoming executive chair of the board of directors. This comes after several underwhelming quarters with declining or near-flat sales year over year. The second quarter was no exception: While net sales improved from the first quarter, they were still down 0.9% year over year. “I’ve seen how our business can perform when we’re at our best, and therefore, where we also have clear opportunities today to improve our performance — and we must improve,” Fiddelke said on the company’s second-quarter earnings call Wednesday morning. “I know we’re not realizing our full potential right now, and so, I’m stepping into the role with a clear and urgent commitment to build new momentum in the business and get back to profitable growth.” Because Fiddelke is a longtime Target insider rather than an external hire, analysts doubt whether he will shake things up enough to get the company back on track. “We are unsure of how Mr. Fiddelke will change the strategy he helped create,” wrote Joseph Feldman, senior managing director and assistant director of research for Telsey Advisory Group. During the call with investors, Fiddelke outlined his three priorities for returning to growth and reinforcing what he believes makes Target special. Priority 1: Reestablish Target’s distinct ‘merchandising authority’ Fiddelke said Target needs to “reclaim its merchandising authority,” as one of Target’s most critical attributes is having industry-leading style and design. “As you’ve seen over the last few years, even when overall results have fallen short of our aspirations, we’ve shown how strongly our guests respond when we offer the right blend of quality, value and style not seen anywhere else in the market,” Fiddelke said. He said, to do so, the retailer needs to make sure it’s bringing this authority across each category in its business throughout the year. “That will require change, and that change is happening,” Fiddelke added, noting that the company has already begun reshaping its hardlines assortment — now called “Fun 101.” “We’re already seeing positive comps and traffic growth in these categories, all from leaning into style and culture in much the same way we’re known for in our apparel assortment.” The hardlines segment was up more than 5% in Q2, according to Rick Gomez, Target’s chief commercial officer. This has included leaning into trading cards; Gomez said sales for the segment are up 70% so far this year. Trading cards are on track to deliver $1 billion in sales this year, according to the company. Gomez also said tech accessories like brightly colored headphones and phone cases and toys under $20 are leading share growth within the hardlines segment, and that the launch of the Nintendo Switch 2 game console will continue to drive sales in the back half of the year through the sale of hardware and software, as well as related apparel, toys and collectibles. 
Fiddelke added that the company needs to also bring this approach to its home category, as well as food and beverage, where he said the company has opportunities to build on newness and differentiation both from national brands and Target’s private labels. “Across the entire assortment, we have an opportunity to further leverage our merchandising authority through our more than $31 billion [private-label] portfolio, where we’ve spent decades building and refining our industry leading design and sourcing capabilities,” Fiddelke said. “The team behind these capabilities truly puts us in a category of one in our ability to read, shape, scale and deliver emerging merchandising and style trends at incredible value.” Priority 2: Elevate the store experience more consistently and frequently Target needs to improve the store experience, especially in its consistency, the incoming CEO said. The company is making progress, he added, noting that its on-shelf availability metrics in Q2 were the best the company has seen in years — particularly in “key items that should never be out of stock.” “We’re also seeing far greater consistency in our intraday inventory reliability, as well as between weekdays and key weekend shopping windows,” Fiddelke added. “We will continue to build on this momentum.” Love for the Target brand, he said, is fueled by an “elevated and joyful” shopping experience in stores and online. “Beyond the assortment we sell is how we sell it,” Fiddelke said. “We can never take for granted the love our guests show us when they affectionately refer to their local store as ‘my Target.’ That’s loyalty we need to consistently go out and earn from well-stocked shelves and clean stores to a friendly and helpful team and an online experience that brings inspiration and discovery. We want to delight our guests who shop with us, every time they shop.” Priority 3: More fully use technology to improve speed, guest experience and efficiency Through the company’s new growth office, Fiddelke said the company has identified the biggest challenges that slow it down. This includes legacy technology that doesn’t meet today’s needs, manual work that can be automated, unclear accountability, slow decision-making, siloed goals and a lack of access to quality data. He added that since the company’s last earnings call, it has deployed more than 10,000 new AI licenses across the company, though he didn’t specify what these include. “As we continue investing in our future growth, we’ll be making key technology investments throughout our stores, supply chain, headquarters, and digital operations to power our team and our business,” he said. He said Target is working to redesign large, cross-functional processes, like how it builds its merchandising and inventory plans, to clarify roles and access

Target’s next CEO shares his 3 top priorities in returning to growth Read More »

Marketers increasingly pressured to show their creator spend is worth it — with harder metrics

With creators, marketers are back at a familiar crossroads: measurement headaches. That moment always comes whenever ad dollars pile up around a particular part of the market — and with it, the scrutiny of finance directors. Creators are there now. A few advertisers got there early — L’Oréal was already pushing for measurement standards back in 2017 — but those were edge cases. The bulk of the industry leaned on vanity metrics and called it proof of success. That’s changing. More CMOs now admit they can’t afford to stay on that path. The more they move away from it, the clearer it gets: views and likes aren’t the finish line anymore. They’re the starting point for the harder question: does creator content actually move the business forward? Eight ad execs interviewed for this article said the answer is yes. Proving it, however, is still a work in progress. And it will stay that way until the market clears the structural and political hurdles that keep data siloed and standards fragmented. In the meantime, marketers are piecing together playbooks — experimenting, hedging and hoping something sturdier emerges. “This year, we’ve seen a notable shift in how marketers evaluate creator campaigns, moving away from traditional vanity metrics like follower counts or basic engagement rates toward more tangible, bottom-of-funnel outcomes such as customer acquisition, conversions, and revenue attribution,” said Layla Revis, vp of marketing for social media management platform Sprout. How that plays out depends on how brands view creators. Some advertisers like apparel seller Bombas treat creator marketing separately from organic social. With ad tech platform Agentio, it’s measuring bottom-of-funnel results — views, clicks, conversions and return on ad spend. So far, the brand has found that performance comes less from follower engagement and more from whether the content drives real action. For instance, it saw 5.3 times more ROAS when working with hundreds of YouTube creators from Agentio than its typical YouTube campaigns. “We’ve had brands spend half a million dollars in under 48 hours with 40 to 60 of the best creators on YouTube,” said Arthur Leopold, CEO of creator ad platform Agentio. “That would have taken their influencer team six plus months.” Marketers don’t usually talk about creators this way. That’s programmatic language. And that’s the shift: this isn’t just about new tools — it’s about mindset. Creators are now a legitimate, ongoing part of media plans. The difference is that unlike every other part of those plans, there’s still an intangible: influence itself. Measuring it was hard enough when creators were new. Now, in a saturated market, it’s harder still. That’s why many marketers are wary of chasing conversions alone. It risks performance myopia and creative commoditization. To them, measurement also means sentiment, watch time and share rates. The bet is that if a creator can build brand salience and equity over time, the harder metrics will follow. “When we talk about the measurement issue in this space we’re really talking about organic,” said Jamie Guffreund, owner of consultancy Creative Vision. This thinking is baked into native@AMV, Omnicom’s new creator shop. The agency is stitching together a measurement stack that blends qualitative and quantitative signals — using Kolsquare for creator identification and audience analysis, Sprinklr and Sprout for social benchmarking and channel management, plus proprietary tools for direct access to creator and platform data.
It’s an effort to systematize the messy business of influence, without stripping it of what makes it valuable. “Being part of that conversation and building that mental availability is what we’re really focusing on,” said the agency’s head, Sam Reagan Asante. For IF7’s clients, the agency runs custom studies with opt-in panels of real people, who are asked a series of questions such as ‘How likely are you to purchase a product from brand X in the next 30 days?’, are shown the creator campaign content, and are then asked the same questions immediately after that exposure. The goal is to illustrate a lift in these key metrics; the same panel approach can also be used to measure the efficacy of each piece of creator content in a campaign. “We’re utilizing this pre and post methodology for brand lift studies,” said Harley Block, CEO of IF7. And they’re not the only ones. In the spring, Whalar became the first creator agency to partner with Kantar. It now uses Kantar’s Link AI — a predictive testing tool for social creative — to forecast performance before campaigns run, then measure in-flight. With Kantar’s data, it can also factor in how external conditions shape results. “Kantar has been a really interesting partner for Whalar because they’ve helped validate what we’re doing and subsequently get the confidence of the C-suite now that we can measure what we get around creators with Kantar’s other solutions like econometric modelling,” said Emma Harman, co-CEO of Whalar. Still, there’s little point in measuring how well creator content performs if the creator isn’t a fit to begin with. Marketers are starting to put just as much emphasis on measuring the creators themselves — not just their output. Call it pre-flight measurement. Take Horizon Media’s full-service social agency Blue Hour Studios. It has developed a scoring system for creators its clients can refer to. It uses AI to determine how likely a creator’s content is to actually be seen by an audience regardless of their raw follower count. To do so, the tool looks at consistency of posting, whether audiences engage — likes, shares, comments — and how well the content travels with platform algorithms. The result is a single score that turns potential reach into a predictive metric, said Monika Ratner, head of growth at Blue Hour Studios. “There are times where a marketer might think they need to work with a parent of a young family influencer, for instance, but that might not be the right approach to actually drive purchase intent with your audience,” Ratner continued. “It might be finding somebody who’s obsessed with cat videos or working with somebody who makes really posh

Marketers increasingly pressured to show their creator spend is worth it — with harder metrics Read More »

CTV looks to invest in creator content to win over more ad dollars

In 2025, CTV channel operators are widening their portfolios of creator content in a bid to capture more ad dollars. CTV companies such as Tubi, Samsung TV Plus and Netflix are leaning into creators, with all three expanding their creator offerings or announcing the production of original creator content in recent months as brands continue to raise their spending on the channel and on creators. By 2027, U.S. brands are projected to spend $13.7 billion annually on influencer marketing, according to eMarketer. On August 14, Tubi became the latest CTV company to announce the expansion of its creator business, adding thousands of videos to its creator portfolio and hiring Kudzi Chikumbu, TikTok’s former global head of creator marketing, as a vp of creator partnerships. “In the last two months, we’re seeing a steady increase in consumption of creator content, and we’re seeing great engagement with the content,” said Tubi general manager of creator programs and evp of business development Rich Bloom, who declined to provide exact figures, but noted that Tubi’s creator program had expanded “10X” since launching in June, going from six creators and 500 videos to roughly 60 creators and 5,000 videos. “A meaningful percentage of our overall viewers have watched at least one piece of creator content.” Tubi is far from the only CTV operator to invest in creators in 2025. Much of this activity is taking place on FAST channels — free, ad-supported television channels featuring creators’ videos. On July 28, for example, Samsung TV Plus announced its largest-ever slate of creator-led FAST channels, expanding its creator category from two channels to 10. Between the first and second quarters of 2025, viewership of the creator-powered CTV channel Creator Television skyrocketed, with the channel experiencing a 95 percent increase in total minutes viewed and a 111 percent increase in total user sessions, according to Creator TV co-founder and head of content Charlie Ibarra, who said his company’s advertising revenue was experiencing a corresponding rise but did not share specific numbers. CTV platforms have been testing creator-style content for years — from Pluto TV’s “People are Awesome” channel to Roku’s Originals and influencer-driven FAST offerings on Amazon Freevee, now rebranded to Free To Watch. Now, what felt exploratory is turning strategic, as streaming services push for more creator-backed content to differentiate themselves. “In our conversations with advertisers, they’re hungry for these types of opportunities — they’re hungry to figure out how to reach audiences in a new, organic and interesting way,” said Samsung TV Plus vp of content and programming Takashi Nakano. “This is really the next evolution of what we think of as the content space.” Value for advertisers Asked what sets Samsung TV Plus’s creator channels apart from platforms like YouTube, Nakano framed Samsung’s content selection process as “meticulous” and as a way for advertisers to be near higher-quality creator content compared to YouTube’s feed. Other CTV operators, from Tubi to Vevo, echoed the sentiment that creator-led CTV channels offer advertisers access to creators’ highly engaged fandoms through the “premium” television screen. “The lean-back, CTV experience is definitely a premium one, and usually the way we see co-viewing happening,” said Vevo vp of U.S. sales Melissa Sofo. Advertisers’ appetite for CTV is growing. 
After all, content creators have become an increasing focus for marketers, with brands such as Unilever indicating plans to spend half of their budgets on social channels by the end of 2025.  Dentsu Creative U.K. CEO Jessica Tamsedge told Digiday that she was “seeing real momentum in this area,” framing brands’ growing spending on CTV creator channels as an extension of their longstanding interest in creator content that shows up in ad-supported spaces. Tamsedge did not share specific numbers with regard to Dentsu Creative UK clients’ spending on CTV creators. “When you put those two forces together, the cultural pull of creators and the lean-in, high-attention environment of CTV, you get what we call a ‘double attention hit,’ which is unsurprisingly compelling for brands,” Tamsedge said. Although brands are increasingly interested in creator-led CTV content, this mounting interest is driven more by practical economics and measurement needs, rather than an urgent desire to show up alongside creators, according to Ogilvy executive director of connections and media Mack Leahy.  “Most brands are treating repurposed YouTube content on smart TV channels as discount premium video where they can check the ‘we’re working with creators’ box while accessing inventory that costs 30 to 40 percent less than comparable premium CTV placements,” Leahy said. “The creator element is almost incidental — they’d probably buy the same inventory if it was generic lifestyle content if the price was right and the environment was brand safe.” Original content on the horizon For now, the overwhelming majority of CTV creator channels simply repurpose content that is also available on other platforms such as YouTube. FAST channel operators using pre-existing creator content as their inventory need to convince advertisers of the value of showing up alongside creators on connected TVs, rather than the already-popular social platforms that have long been part of brands’ marketing mixes. More original creator content will soon be available on CTV. Alongside the expansion of its creator FAST channels, Samsung also announced the production of a 13-episode original series in collaboration with YouTuber Dhar Mann, which will stream exclusively on the creator’s Samsung TV Plus channel. On August 18, Netflix announced the production of an original series with YouTuber Mark Rober, which is slated to air in 2026.  Samsung and Netflix’s announcements are only the beginning. Dentsu US and UK are “investing in creators to develop and co-create content designed specifically for CTV and other non-social channels,” per Tamsedge. As for Tubi, the company plans to fund the production of original creator content for its channels by “later this year,” according to Bloom, who said that this would be a key responsibility for Chikumbu, Tubi’s new creator vp.  “I’m excited that he got the job; I think it was a really good move for them, based on

CTV looks to invest in creator content to win over more ad dollars Read More »

Media Briefing: Publishers catch new vibes from Meta on AI licensing

This Media Briefing covers the latest in media trends for Digiday+ members and is distributed over email every Thursday at 10 a.m. ET. More from the series → This week’s Media Briefing covers publishers picking up on Meta’s evolving message about AI content licensing deals. Publishers are picking up new vibes from Meta, which they believe signal that the platform may be changing its stance on AI licensing.  So far, it’s more rhetoric than reality. Nevertheless, if it were to come to fruition, it could reset the dynamic between Meta and publishers, many of whom still feel burned by years of declining referral traffic from the platform.  When publishers, retailers and cloud edge companies gathered at the IAB Tech Lab’s first workshop in New York City on 23 July, to discuss first steps in creating a standardized framework for AI content monetization and attribution, Meta and Google were also present.  Four publishers Digiday spoke to said that during the event, Meta’s message was that there is now more buy-in at the senior level within the company regarding the value of potentially forging closer ties with publishers of quality content. “[They] were clear that the new leadership knows that AI runs on good content, and that has moved Meta to engage more,” said a publishing exec of a major media brand who attended the event and agreed to speak under the condition of anonymity.   Of all the platforms, Meta has the fewest AI licensing deals, having partnered with Reuters last year to use its content to answer user questions in real time about news and current events.  Meta has radically reorganized its AI operations under a new division called Meta Superintelligence Labs, consolidating all its AI teams — from foundational model development to product engineering — under one roof. “Meta was very clear that with their new leadership – because they’ve obviously been building a Superintelligence team very quickly, and there’s completely new AI leadership – they were very clear that there’s a different attitude to accessing information going forward than what they had before,” said the same publishing exec. “Although it turns into a whole different question. They were very clear that their stance on this has changed, even though it may not have outwardly turned into action yet.” Meta declined to comment for this article. No road map is currently in the works and nothing formative has been established at Meta. No publishers are under any illusion about the fact that Meta is in the information-gathering phase. “My takeaway was that they are now approaching things differently and expanding how they source their information beyond their platform,” said another exec who attended the meeting.  Earlier this year, Raptive signed a contract with its creators and publishers, which means it can now handle AI licensing negotiations on their behalf. Since then, it has received an inquiry from Meta around the potential AI licensing of one of its publishers on the platform, though without any firm commitment, according to Raptive chief strategy officer Paul Bannister. “They weren’t, like, ready to do a deal. They were like, we’re just trying to figure out the lay of the land here and see what to do. So it does seem like they are trying to figure it out,” he said.  With no concrete details from Meta on what its Superintelligence unit will actually deliver, or what its appetite for AI licensing deals will really be, speculation is the only game in town for now. 
But various industry execs Digiday spoke to believe that in order to compete with AI rivals OpenAI, Google and Anthropic it needs to have its own access to new quality content – the type premium publishers have in spades. Even Amazon has acknowledged its need to closely pair with publishers, having recently signed AI licensing deals with the New York Times, Conde Nast and Hearst.  Despite what Google claims, publishers believe they can’t fully block its AI crawler without jeopardizing their search rankings, whereas there is no such catch with their blocking Meta’s crawler. And a quick glance at the robots.txt files of publishers including The Guardian, Washington Post, Financial Times, New York Times and News UK – publisher of the Times of London and tabloid The Sun – shows they’re all blocking the main Meta Llama crawler.  “Nearly no one is blocking Google from scraping their content, because if you block Google, you lose all the search traffic, so no one does it,” said Bannister. “OpenAI has negotiated a large number of deals with people [publishers] so even though a bunch of people are blocking them [OpenAI], they have access to a large swathe of high quality content, and everybody else, I think, is getting blocked pretty heavily these days, including Meta, because publishers have no incentive to let Meta scrape their content,” he added.   And when it comes to the AI race, neither Google nor Meta can afford to remain static. And Meta has some catching up to do, stressed several industry execs Digiday spoke to.  “The frontier models understand that in order to be truly useful, they need a model to get fresh news and data,” said a publishing exec under condition of anonymity.  What we’ve heard “Sites will be getting much less traffic, and advertisers will be spending more money per impression on sites, and more share on Google and answer engines than they do now… It has to be a paradigm shift with advertisers whereby they realize they will be paying more for more qualified eyeballs, but won’t be wasting their budgets on so many hit and run users. Advertisers won’t come around to that paradigm fast enough so many sites will shutter. We have a bumpy road ahead.” – A head of SEO at a large lifestyle publisher. IAB Tech Lab maintains momentum around AI remuneration standards framework  In the age of AI, “friction” has become publishers’ buzzword for survival.  Publishers are leaning hard into creating this – what is essentially friction

Media Briefing: Publishers catch new vibes from Meta on AI licensing Read More »

How AI ‘digital minds’ startup Delphi stopped drowning in user data and scaled up with Pinecone

Delphi, a two-year-old San Francisco AI startup named after the Ancient Greek oracle, was facing a thoroughly 21st-century problem: its “Digital Minds” — interactive, personalized chatbots modeled after an end-user and meant to channel their voice based on their writings, recordings, and other media — were drowning in data. Each Delphi can draw from any number of books, social feeds, or course materials to respond in context, making each interaction feel like a direct conversation. Creators, coaches, artists and experts were already using them to share insights and engage audiences. But each new upload of podcasts, PDFs or social posts to a Delphi added complexity to the company’s underlying systems. Keeping these AI alter egos responsive in real time without breaking the system was becoming harder by the week. Thankfully, Delphi found a solution to its scaling woes using managed vector database darling Pinecone. Open source only goes so far Delphi’s early experiments relied on open-source vector stores. Those systems quickly buckled under the company’s needs. Indexes ballooned in size, slowing searches and complicating scale. Latency spikes during live events or sudden content uploads risked degrading the conversational flow. Worse, Delphi’s small but growing engineering team found itself spending weeks tuning indexes and managing sharding logic instead of building product features. Pinecone’s fully managed vector database, with SOC 2 compliance, encryption, and built-in namespace isolation, turned out to be a better path. Each Digital Mind now has its own namespace within Pinecone. This ensures privacy and compliance, and narrows the search surface area when retrieving knowledge from its repository of user-uploaded data, improving performance. A creator’s data can be deleted with a single API call. Retrievals consistently come back in under 100 milliseconds at the 95th percentile, accounting for less than 30 percent of Delphi’s strict one-second end-to-end latency target. “With Pinecone, we don’t have to think about whether it will work,” said Samuel Spelsberg, co-founder and CTO of Delphi, in a recent interview. “That frees our engineering team to focus on application performance and product features rather than semantic similarity infrastructure.” The architecture behind the scale At the heart of Delphi’s system is a retrieval-augmented generation (RAG) pipeline. Content is ingested, cleaned, and chunked; then embedded using models from OpenAI, Anthropic, or Delphi’s own stack. Those embeddings are stored in Pinecone under the correct namespace. At query time, Pinecone retrieves the most relevant vectors in milliseconds, which are then fed to a large language model to produce responses. This design allows Delphi to maintain real-time conversations without overwhelming system budgets. As Jeffrey Zhu, VP of Product at Pinecone, explained, a key innovation was moving away from traditional node-based vector databases to an object-storage-first approach.
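To make the namespace-per-creator pattern described above more concrete, here is a minimal sketch of an ingest-retrieve-delete loop against Pinecone’s Python SDK. It is not Delphi’s actual code: the index name, embedding model, and helper functions are illustrative assumptions, and Delphi’s production pipeline (chunking, multiple embedding providers, strict latency budgets) is considerably more involved.

```python
# Minimal sketch of a namespace-per-creator RAG retrieval flow (illustrative, not Delphi's code).
# Assumes the current Pinecone Python SDK and an OpenAI embedding model; index name,
# namespace IDs, and credentials are placeholders.
from openai import OpenAI
from pinecone import Pinecone

pc = Pinecone(api_key="PINECONE_API_KEY")   # placeholder credentials
index = pc.Index("digital-minds")           # hypothetical index name
oai = OpenAI(api_key="OPENAI_API_KEY")

def embed(text: str) -> list[float]:
    # One embedding per chunk or query; the article notes Delphi mixes providers.
    resp = oai.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def ingest_chunk(creator_id: str, chunk_id: str, text: str) -> None:
    # Each Digital Mind gets its own namespace, isolating tenants and
    # narrowing the search surface at query time.
    index.upsert(
        vectors=[{"id": chunk_id, "values": embed(text), "metadata": {"text": text}}],
        namespace=creator_id,
    )

def retrieve(creator_id: str, query: str, top_k: int = 5) -> list[str]:
    # Query only this creator's namespace; the matches become context for the LLM prompt.
    res = index.query(
        vector=embed(query),
        top_k=top_k,
        namespace=creator_id,
        include_metadata=True,
    )
    return [m.metadata["text"] for m in res.matches]

def delete_creator(creator_id: str) -> None:
    # Dropping a namespace removes one creator's data in a single call.
    index.delete(delete_all=True, namespace=creator_id)
```

Keeping each Digital Mind in its own namespace is what makes both the sub-100-millisecond retrieval target and the single-call deletion described above plausible, since every query touches only one tenant’s vectors.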
Instead of keeping all data in memory, Pinecone dynamically loads vectors when needed and offloads idle ones. “That really aligns with Delphi’s usage patterns,” Zhu said. “Digital Minds are invoked in bursts, not constantly. By decoupling storage and compute, we reduce costs while enabling horizontal scalability.” Pinecone also automatically tunes algorithms depending on namespace size. Smaller Delphis may only store a few thousand vectors; others contain millions, derived from creators with decades of archives. Pinecone adaptively applies the best indexing approach in each case. As Zhu put it, “We don’t want our customers to have to choose between algorithms or wonder about recall. We handle that under the hood.” Variance among creators Not every Digital Mind looks the same. Some creators upload relatively small datasets — social media feeds, essays, or course materials — amounting to tens of thousands of words. Others go far deeper. Spelsberg described one expert who contributed hundreds of gigabytes of scanned PDFs, spanning decades of marketing knowledge. Despite this variance, Pinecone’s serverless architecture has allowed Delphi to scale beyond 100 million stored vectors across 12,000+ namespaces without hitting scaling cliffs. Retrieval remains consistent, even during spikes triggered by live events or content drops. Delphi now sustains about 20 queries per second globally, supporting concurrent conversations across time zones with zero scaling incidents. Toward a million digital minds Delphi’s ambition is to host millions of Digital Minds, a goal that would require supporting at least five million namespaces in a single index. For Spelsberg, that scale is not hypothetical but part of the product roadmap. “We’ve already moved from a seed-stage idea to a system managing 100 million vectors,” he said. “The reliability and performance we’ve seen gives us confidence to scale aggressively.” Zhu agreed, noting that Pinecone’s architecture was specifically designed to handle bursty, multi-tenant workloads like Delphi’s. “Agentic applications like these can’t be built on infrastructure that cracks under scale,” he said. Why RAG still matters and will for the foreseeable future As context windows in large language models expand, some in the AI industry have suggested RAG may become obsolete. Both Spelsberg and Zhu push back on that idea. “Even if we have billion-token context windows, RAG will still be important,” Spelsberg said. “You always want to surface the most relevant information. Otherwise you’re wasting money, increasing latency, and distracting the model.” Zhu framed it in terms of context engineering — a term Pinecone has recently used in its own technical blog posts. “LLMs are powerful reasoning tools, but they need constraints,” he explained. “Dumping in everything you have is inefficient and can lead to worse outcomes. Organizing and narrowing context isn’t just cheaper—it improves accuracy.” As covered in Pinecone’s own writings on context engineering, retrieval helps manage the finite attention span of language models by curating the right mix of user queries, prior messages, documents, and memories to keep interactions coherent over time. Without this, windows fill up, and models

How AI ‘digital minds’ startup Delphi stopped drowning in user data and scaled up with Pinecone Read More »

Enterprise Claude gets admin, compliance tools—just not unlimited usage

August 20, 2025 4:28 PM Credit: VentureBeat, generated with MidJourney A few weeks after announcing rate limits for Claude and the popular Claude Code, Anthropic will offer Claude Enterprise and Teams customers upgrades to access more usage and Claude Code in a single subscription. The upgrades will also include more admin controls and a new Compliance API that will give enterprises “access to usage data and customer content for better observability, auditing and governance.” Anthropic said in a post that with a single subscription to Claude and Claude Code, users “can move seamlessly between ideation and implementation, while admins get the visibility and controls they need to scale Claude across their organization.” Claude Code is now available on Team and Enterprise plans. Flexible pricing lets you mix standard and premium Claude Code seats across your organization and scale with usage. — Claude (@claudeai) August 20, 2025 The premium seats, separate from the standard seats that most everyone in the organization receives, can be used with both Claude and Claude Code. Admins can assign individuals premium seats based on their role in the organization. Throttled rates Anthropic’s announcement of additional usage for Enterprise and Teams users sparked criticism, with critics demanding that the company remove rate limits for Claude. The company said rate limits, which will begin on August 28, would free up space for more projects and deter people who “abuse” the system by overusing Claude Code. Yea but what about getting rid of throttling?… — Cyb3rEchos (@Cyb3rEchos) August 20, 2025 Big news! The premium seats sound promising—especially with more Claude Code access. Does the new Claude Code upgrade also affect API rate limits or make it easier to set up custom integrations for teams? — Ben✨ (@_BenResearch) August 20, 2025 Extra usage for all users pls.. Feels like we’ve been at 45/5hr for a lifetime. — Artificially Inclined™ (@Art_If_Ficial) August 20, 2025 In an email, Anthropic told VentureBeat that the existing five-hour usage limits still stand for premium seats on Enterprise and Team, the same as for users of Max 5x. “Now with the new Claude Code bundle, both standard (Claude.ai access) and premium (Claude.ai + Claude Code access) seats have the option for extra usage and admins have robust seat management controls so that power users can continue their workflows with Claude however they need,” Anthropic said through a spokesperson. Admin and compliance control The draw for the upgrades, Anthropic said, revolves around the additional controls and enterprise-ready features. “While individual Max plans work for personal use, the Enterprise bundle provides the security, compliance, analytics, and management capabilities that organizations need at scale,” the company said.
Anthropic noted enterprise customers often have to choose between speed and governance, so bringing in admin controls and compliance features “solves that tradeoff by letting teams move seamlessly between planning in Claude and building in the terminal using Claude Code.” It also consolidates expenses using Claude Code from individual accounts to the broader enterprise. Enterprise IT admins will be able to manage seats, including buying and allocating the seats, set spending controls and view Claude Code analytics in Claude, including knowing which lines of code were accepted and usage patterns. They can also set tool permissioning, policy settings and MCP configurations. Since the number of seats will be based on the number of premium or standard seats the enterprises need, Anthropic said it will offer flexible pricing. The Compliance API enables companies, particularly those in regulated sectors, to access usage data and customer content on Claude for monitoring and policy enforcement. The API allows organizations to bring Claude data into their compliance and orchestration dashboards.

Enterprise Claude gets admin, compliance tools—just not unlimited usage Read More »

TikTok parent company ByteDance releases new open source Seed-OSS-36B model with 512K token context

ByteDance’s Seed Team of AI researchers today released Seed-OSS-36B on AI code sharing website Hugging Face. Seed-OSS-36B is a new line of open source large language models (LLMs) designed for advanced reasoning and developer-focused usability, with a longer token context — that is, how much information the models can accept as inputs and then output in a single exchange — than many competing LLMs from U.S. tech companies, even leaders such as OpenAI and Anthropic. The release includes three variants: Seed-OSS-36B-Base with synthetic data, Seed-OSS-36B-Base without synthetic data, and Seed-OSS-36B-Instruct. In releasing both synthetic and non-synthetic versions of the Seed-OSS-36B-Base model, the Seed Team sought to balance practical performance with research flexibility. The synthetic-data variant, trained with additional instruction data, consistently delivers stronger scores on standard benchmarks and is intended as a higher-performing general-purpose option. The non-synthetic model, by contrast, omits these augmentations, creating a cleaner foundation that avoids potential bias or distortion introduced by synthetic instruction data. By providing both, the team gives applied users access to improved results while ensuring researchers retain a neutral baseline for studying post-training methods. Meanwhile, the Seed-OSS-36B-Instruct model differs in that it is post-trained with instruction data to prioritize task execution and instruction following, rather than serving purely as a foundation model. All three models are released under the Apache-2.0 license, allowing free use, modification, and redistribution by researchers and developers working for enterprises. That means they can be used to power commercial applications, internal to a company or external/customer-facing, without paying ByteDance any licensing fees or for application programming interface (API) usage. This continues the summer 2025 trend of Chinese companies shipping powerful open source models, with OpenAI attempting to catch up with its own open source gpt-oss duo released earlier this month. The Seed Team positions Seed-OSS for international applications, emphasizing versatility across reasoning, agent-like task execution, and multilingual settings. The Seed Team, formed in 2023, has concentrated on building foundation models that can serve both research and applied use cases. Design and core features The architecture behind Seed-OSS-36B combines familiar design choices such as causal language modeling, grouped query attention, SwiGLU activation, RMSNorm, and RoPE positional encoding. Each model carries 36 billion parameters across 64 layers and supports a vocabulary of 155,000 tokens. One of the defining features is its native long-context capability, with a maximum length of 512,000 tokens, designed to process extended documents and reasoning chains without performance loss. That’s twice the length of OpenAI’s new GPT-5 model family and is roughly equivalent to about 1,600 pages of text, the length of a Christian Bible. 
Another distinguishing element is the introduction of a thinking budget, which lets developers specify how much reasoning the model should perform before delivering an answer. It’s something we’ve seen from other recent open source models as well, including Nvidia’s new Nemotron-Nano-9B-v2, also available on Hugging Face. In practice, this means teams can tune performance depending on the complexity of the task and the efficiency requirements of deployment. Budgets are recommended in multiples of 512 tokens, with 0 providing a direct response mode. Competitive performance on third-party benchmarks Benchmarks published with the release position Seed-OSS-36B among the stronger large open-source models. The Instruct variant, in particular, posts state-of-the-art results in multiple areas. Math and reasoning: Seed-OSS-36B-Instruct achieves 91.7 percent on AIME24 and 65 on BeyondAIME, both representing open-source “state-of-the-art” (SOTA). Coding: On LiveCodeBench v6, the Instruct model records 67.4, another SOTA score. Long-context handling: On RULER at 128K context length, it reaches 94.6, marking the highest open-source result reported. Base model performance: The synthetic-data Base variant delivers 65.1 on MMLU-Pro and 81.7 on MATH, both state-of-the-art results in their categories. The non-synthetic Base version, while slightly behind on many measures, proves competitive in its own right. It outperforms its synthetic counterpart on GPQA-D, providing researchers with a cleaner, instruction-free baseline for experimentation. For enterprises comparing open options, these results suggest Seed-OSS offers strong potential across math-heavy, coding, and long-context workloads while still providing flexibility for research use cases. Access and deployment Beyond performance, the Seed Team highlights accessibility for developers and practitioners. The models can be deployed using Hugging Face Transformers, with quantization support in both 4-bit and 8-bit formats to reduce memory requirements. They also integrate with vLLM for scalable serving, including configuration examples and API server instructions. To lower barriers further, the team includes scripts for inference, prompt customization, and tool integration. For technical leaders managing small teams or working under budget constraints, these provisions are positioned to make experimentation with 36-billion-parameter models more approachable. Licensing and considerations for enterprise decision-makers With the models offered under Apache-2.0, organizations can adopt them without restrictive licensing terms, an important factor for teams balancing legal and operational concerns. For decision makers evaluating the open-source landscape, the release brings three takeaways: State-of-the-art benchmarks across math, coding, and long-context reasoning. A balance between higher-performing synthetic-trained models and clean research baselines. Accessibility features that lower operational overhead for lean engineering teams. By placing strong performance and flexible deployment under an open license, ByteDance’s Seed Team has added new options for enterprises, researchers, and developers alike.
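As a rough illustration of the Hugging Face Transformers deployment path described above, the sketch below loads the Instruct variant and runs a single chat-style generation. The repo ID, dtype, and generation settings are assumptions rather than details confirmed by the article; the model card remains the authority on the exact identifier, chat template, quantized variants, and how the thinking budget is configured.

```python
# Minimal sketch of loading Seed-OSS-36B-Instruct with Hugging Face Transformers.
# The repo ID below is an assumed identifier; check the model card for the exact name,
# whether trust_remote_code is required, and how to pass the "thinking budget" option.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # or load in 4-/8-bit via bitsandbytes to cut memory
    device_map="auto",           # shard across available GPUs (requires accelerate)
)

messages = [
    {"role": "user", "content": "Summarize the trade-offs of a 512K-token context window."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In practice a 36-billion-parameter model will not fit on a single consumer GPU at bf16, which is why the release’s 4-bit and 8-bit quantization options and vLLM integration matter for smaller teams.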

TikTok parent company ByteDance releases new open source Seed-OSS-36B model with 512K token context Read More »
