AI Submissions for Wed Dec 03 2025
Reverse engineering a $1B Legal AI tool exposed 100k+ confidential files
Submission URL | 761 points | by bearsyankees | 266 comments
Filevine bug exposed full admin access to a law firm’s Box drive via an unauthenticated API; fixed after disclosure
A security researcher probing AI legal-tech platform Filevine found that a client-branded subdomain with a stuck loading screen leaked clues in its minified frontend JavaScript. Those pointed to an unauthenticated “recommend” endpoint on an AWS API Gateway. Hitting it returned a Box access token and folder list—no auth required. The token was a fully scoped admin credential for the firm’s entire Box instance, implying potential access to millions of highly sensitive legal documents. After a minimal impact check, the researcher stopped and disclosed.
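As a rough illustration of the flaw class described above (the endpoint URL and JSON field names below are hypothetical, not Filevine's actual API; only the Box /2.0/folders endpoint is real), the whole failure reduces to an unauthenticated request whose response hands back a live storage credential:

```python
# Hypothetical sketch of the reported flaw class; the endpoint URL and JSON
# field names are invented for illustration, not taken from Filevine's API.
import requests

# Unauthenticated call: no cookie, no API key, no Authorization header.
resp = requests.post(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/recommend",  # placeholder endpoint
    json={"query": "smith v. jones"},
    timeout=10,
)
data = resp.json()

# The reported failure mode: the response itself contains a Box credential
# plus a folder listing.
box_token = data.get("box_access_token")   # fully scoped admin credential
print(data.get("folders", []))

# Anyone holding that token can then call the Box API directly
# (GET /2.0/folders/0 is the real Box endpoint for the root folder):
check = requests.get(
    "https://api.box.com/2.0/folders/0",
    headers={"Authorization": f"Bearer {box_token}"},
    timeout=10,
)
print(check.status_code)  # 200 here would mean tenant-wide access with no auth at all
```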
Timeline: discovered Oct 27, 2025 → acknowledged Nov 4 → fix confirmed Nov 21 → writeup published Dec 3. The researcher says Filevine was responsive and professional. The affected subdomain referenced “margolis,” but the firm clarifies it was not Margolis PLLC.
Why it matters:
- Returning cloud provider tokens to the browser and leaving endpoints unauthenticated is catastrophic in legal contexts (HIPAA, court orders, client privilege).
- AI vendors handling privileged data must enforce strict auth on every API, use least-privilege/scoped tokens, segregate tenants, and avoid exposing credentials client-side (see the sketch after this list).
- Law firms should rigorously vet AI tools’ security posture before adoption.
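A minimal sketch of the safer pattern those recommendations imply, assuming a Flask backend with an illustrative session table (the route, header name, and folder id are invented; only the Box /2.0/folders/{id}/items endpoint is real): authenticate every call, keep the scoped credential server-side, and return data rather than tokens.

```python
# Sketch of the safer pattern: auth on every call, credential stays server-side.
# Route, session table, and header name are illustrative assumptions.
import os
import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
BOX_TOKEN = os.environ["BOX_SERVICE_TOKEN"]  # least-privilege, scoped token; never sent to the client
SESSIONS = {"example-session-id": {"tenant_folder": "0"}}  # stand-in for a real session store

@app.post("/recommend")
def recommend():
    session = SESSIONS.get(request.headers.get("X-Session-Id", ""))
    if session is None:
        abort(401)  # unauthenticated callers get nothing, not a credential
    # Proxy the Box call server-side instead of returning the token to the browser.
    resp = requests.get(
        f"https://api.box.com/2.0/folders/{session['tenant_folder']}/items",
        headers={"Authorization": f"Bearer {BOX_TOKEN}"},
        timeout=10,
    )
    entries = resp.json().get("entries", [])
    return jsonify({"folders": [e["name"] for e in entries]})  # data only; no token in the response
```

The design point: the browser only ever receives document metadata; the Box credential never leaves the server.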
The HN discussion is active. Based on the comments, it centers on the severity of the oversight, the viability of software regulation, and whether AI ("vibe coding") will solve or exacerbate these types of security failures.
Human Impact and Severity
The top thread emphasizes the catastrophic real-world consequences of such a breach. Users construct hypothetical scenarios—such as a single mother in a custody battle being blackmailed with leaked documents—to illustrate that this is not just a technical failing but a human safety issue. Comparisons are drawn to the Vastaamo data breach in Finland (where psychotherapy notes were used for extortion), with users noting that the use of unverified, unencrypted ("http-only") endpoints makes data trivial to intercept.
Regulation vs. Market Correction
A debate emerges regarding the "industrialization" of code quality:
- The "Building Inspector" Argument: The root commenter argues that software handling sensitive data needs mandatory "building codes" and inspections, similar to physical construction, arguing that safety and privacy shouldn't be optional features.
- The Counter-Argument: Skeptics argue that software has too many degrees of freedom compared to physical buildings for rigid codes to work. They suggest that the private market—specifically professional liability insurers and the threat of lawsuits—is better equipped to enforce security standards than government bureaucracy.
The "Vibe Coding" / AI Debate A significant portion of the discussion deviates into whether Generative AI coding is to blame or is the solution:
- Crucial Context Missing: Critics of AI coding argue that Large Language Models (LLMs) lack the "context window" to understand system-wide security. While an AI can write a function, it cannot "keep the whole system in its head," leading to hallucinations regarding API security and authentication logic that human architects usually catch.
- Human Error: Others counter that humans clearly don't need AI to make catastrophic mistakes (citing a history of open S3 buckets). Some predict that within two years, AI coding systems will likely be more secure than the bottom 90% of human developers, characterizing human devs as having "short-term memory" limitations similar to LLMs.
Everyone in Seattle hates AI
Submission URL | 874 points | by mips_avatar | 929 comments
Everyone in Seattle Hates AI (Dec 3, 2025)
A former Microsoft engineer building an AI map app (Wanderfugl) describes surprising hostility to AI among Seattle big‑tech engineers—rooted not in the tech itself but in culture, layoffs, and forced tooling.
Key points:
- A lunch with a respected ex-coworker turned into broad frustration about Microsoft’s AI push, not the author’s product. Similar reactions kept repeating in Seattle, unlike in SF, Paris, Tokyo, or Bali.
- Layoffs and mandates: a director reportedly blamed a PM’s layoff on “not using Copilot 365 effectively.” After the 2023–24 layoff wave, cross-org work was axed; the author went from shipping a major Windows 11 improvement to having no projects and quit.
- “AI or bust” rebrand: teams that could slap an AI label became safe and prestigious; others were devalued overnight as “not AI talent.”
- Forced adoption: Copilot for Word/PowerPoint/email/code was mandated even when worse than existing tools or competitors; teams couldn’t fix them because it was “the AI org’s turf.” Employees were expected to use them, fail to see gains, and stay quiet.
- Protected AI teams vs. stagnating comp and harsher reviews for everyone else bred resentment. Amazon folks feel it too, just cushioned by pay.
- Result: a self-reinforcing belief that AI is both useless and off-limits—hurting companies (less innovation), engineers (stalled careers), and local builders (reflexive hostility).
- Contrast: Seattle has world-class talent, but SF still believes it can change the world—and sometimes does.
Anecdotal but sharp cultural critique of Big Tech’s AI mandates and morale fallout.
Here is a summary of the discussion:
Discussion: The Roots of AI Hostility—Corporate coercion, Centralization, and Quality
Commenters largely validated the submission's critique of Microsoft's internal culture while expanding the debate to include broader dissatisfaction with how AI is being integrated into the tech industry.
- Corporate Toxicity & Forced Metrics: Several users corroborated the "toxic" enforcement of AI at Microsoft, noting that performance reviews are sometimes explicitly linked to AI tool usage. Critics argued this forces engineers to prioritize management metrics over product quality or efficiency, leading to resentment when "insane" mandates force the use of inferior tools.
- Centralization vs. Open Source: A major thread debated the "centralization of power." Users expressed fear that Big Tech is turning intelligence into a rent-seeking utility (likened to the Adobe subscription model) rather than a tool for empowerment. While some argued that open-weight models and local compute offer an escape, others countered that the astronomical hardware costs (GPUs, energy) required for flagship-level models inevitably force centralization similar to Bitcoin mining or Search Engine indexing.
- The "Meaning" Crisis: A recurring sentiment was that AI is automating the "fun" and meaningful parts of human activity (art, writing, coding logic) while leaving humans with the "laundry and dishes." Users worried this removes the satisfying struggle of work and pulls the ladder up for junior employees who need those lower-level tasks to learn.
- Skepticism on Quality ("AI Asbestos"): Pushing back against the idea that people feel "threatened," many argued they mainly reject AI because current implementations simply don't work well. One user coined the term "AI Asbestos"—a toxic, cheap alternative to valuable work that solves problems poorly and requires expensive cleanup (e.g., spending more time fixing an AI meeting summary than it would take to write one manually).
Zig quits GitHub, says Microsoft's AI obsession has ruined the service
Submission URL | 1022 points | by Brajeshwar | 595 comments
Zig quits GitHub over Actions reliability, cites “AI over everything” shift; moves to Codeberg
- What happened: The Zig Software Foundation is leaving GitHub for Codeberg. President Andrew Kelley says GitHub no longer prioritizes engineering excellence, pointing to long‑standing reliability problems in GitHub Actions and an org-wide pivot to AI.
- The bug at the center: A “safe_sleep.sh” script used by GitHub Actions runners could spin forever and peg CPU at 100% if it missed a one‑second timing window under load. Zig maintainers say this occasionally wedged their CI runners for weeks until manual intervention. (A minimal sketch of the failure mode follows this list.)
- Origin: A 2022 change replaced POSIX sleep with the “safe_sleep” loop.
- Discovery: Users filed issues over time; a thread opened April 2025 highlighted indefinite hangs.
- Fix: A platform‑independent fix proposed Feb 2024 languished, was auto‑closed by a bot in March 2025, revived, and finally merged Aug 20, 2025.
- Communication gap: The April 2025 thread remained open until Dec 1, 2025, despite the August fix. A separate CPU-usage bug is still open.
- “Vibe‑scheduling”: Kelley alleges Actions unpredictably schedules jobs and offers little manual control, causing CI backlogs where even main branch commits go untested.
- Outside voices: Jeremy Howard (Answer.AI/Fast.ai) called the bug “very obviously” CPU‑burning and indefinitely running unless it checks the time “during the correct second,” arguing the chain of events reflects poorly on process and review.
- Broader shift away from GitHub: Dillo’s maintainer also plans to leave, citing JS reliance, moderation gaps, service control risk, and an “over‑focus on LLMs.”
- Follow the incentives: Microsoft has leaned hard into Copilot—1.3M paid Copilot subscribers by Q2 2024; 15M Copilot users by Q3 2025—with Copilot driving a big chunk of GitHub’s growth. Critics see this as evidence core platform reliability has taken a back seat.
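A minimal Python reconstruction of the failure mode described under “The bug at the center,” assuming the equality-style deadline check implied by the summary; it illustrates the bug class, not the actual safe_sleep.sh source:

```python
# Illustration of the bug class, not the real runner script: the loop only
# exits if it happens to observe the target second exactly, so a busy runner
# that gets descheduled past that second spins at 100% CPU forever.
import time

def safe_sleep_buggy(seconds: int) -> None:
    deadline = int(time.time()) + seconds
    while int(time.time()) != deadline:      # equality check: the window can be skipped under load
        pass                                 # busy-wait pegs one core the whole time

def safe_sleep_fixed(seconds: float) -> None:
    deadline = time.time() + seconds
    while True:
        remaining = deadline - time.time()
        if remaining <= 0:                   # a ">= deadline" condition cannot be stepped over
            return
        time.sleep(min(1.0, remaining))      # yield the CPU while waiting
```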
Why it matters
- CI reliability is existential for language/tooling projects; weeks‑long runner stalls are untenable.
- The episode highlights tension between AI product pushes and maintenance of dev‑infra fundamentals.
- Alternatives like Codeberg are gaining momentum (supporting members doubled this year), hinting at a potential slow drift of OSS projects away from GitHub if trust erodes.
GitHub did not comment at time of publication.
Based on the comments provided, the discussion on Hacker News focused less on the technical migration to Codeberg and more on the tone and subsequent editing of Andrew Kelley's announcement post.
The Revisions to the Announcement
- The "Diff": Users spotted that the original text of the post was significantly more aggressive. One archived draft described the situation as talented people leaving GitHub, with the "remaining losers" left to inflict a "bloated buggy JavaScript framework" on users. A later edit softened this to state simply that "engineering excellence" was no longer driving GitHub’s success.
- Professionalism vs. Raw Honesty: Several commenters felt the original "losers" remark was childish, unnecessarily personal, and unprofessional. User serial_dev found the updated, professional phrasing "refreshing," while y noted that publishing personal insults like "monkeys" or "losers" undermines the author's position.
- Motivation for the Change: There was debate over why Kelley edited the post.
- Optimistic view: Some saw it as a genuine "mea culpa" (stynx) and a sign of learning from feedback (dnnrsy), arguing that people should be allowed to correct mistakes without being "endlessly targeted."
- Cynical view: Others viewed it as "self-preservation" (snrbls) or "corporate speak" (vks) to save face after backlash, rather than a true change of heart.
Broader Philosophical Debate: Changing One's Mind
- The incident sparked a sidebar conversation about the nature of backtracking in public communication, comparing it to politicians "flip-flopping."
- The "Waffle" accusation: Commenters discussed the tension between accusing leaders of "waffling" (
chrswkly) versus the virtue of adapting opinions based on new information or feedback (ryndrk). - Context Matters: Ideally, a leader changes their mind due to reason, but in this context, some suspected the edit was simply a "PR policy" move to avoid "getting canceled" rather than an actual retraction of the sentiment that GitHub's current staff is incompetent (
a2800276).
Are we repeating the telecoms crash with AI datacenters?
Submission URL | 218 points | by davedx | 187 comments
The post argues the oft-cited analogy breaks once you look at the supply/demand mechanics and the capex context.
What actually happened in telecoms
- 1995–2000: $2T spent laying 80–90M miles of fiber ($4T in today’s dollars; nearly $1T/year).
- By 2002, only 2.7% of that fiber was lit.
- Core mistake: demand was misread. Executives pitched traffic doubling every 3–4 months; reality was closer to every 12 months—a 4x annual overestimate that compounded (rough arithmetic after this list).
- Meanwhile, supply exploded: WDM jumped from 4–8 carriers to 128 by 2000; modulation/error-correction gains and higher bps per carrier yielded orders-of-magnitude more capacity on the same glass. Net effect: exponential supply, merely linear demand → epic overbuild.
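Rough arithmetic behind the "4x annual overestimate that compounded" claim (a sketch with an assumed five-year horizon, not figures from the post):

```python
# Doubling every 3 months means 4 doublings per year; reality was ~1 per year.
pitched_doublings_per_year = 12 / 3
actual_doublings_per_year = 12 / 12

years = 5  # roughly the 1995-2000 build-out window
pitched_growth = 2 ** (pitched_doublings_per_year * years)  # 2^20 ~= 1,000,000x
actual_growth = 2 ** (actual_doublings_per_year * years)    # 2^5  = 32x

print(f"assumed {pitched_growth:,.0f}x vs. actual {actual_growth:,.0f}x traffic growth "
      f"over {years} years -> {pitched_growth / actual_growth:,.0f}x capacity overestimate")
```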
Why AI infrastructure is different
- Efficiency curve is slowing, not exploding:
- 2015–2020 saw big perf/W gains (node shrinks, tensor cores).
- 2020–2025 ~40%/yr ML energy-efficiency gains; EUV-era node progress is harder.
- Power/cooling is going up, not down:
- GPU TDPs: V100 300W → A100 400W → H100 700W → B200 1000–1200W.
- B200-class parts need liquid cooling; many air-cooled DCs require costly retrofits.
- Translation: we’re not on a curve where tech makes existing capacity instantly “obsolete” the way fiber did.
Demand looks set to accelerate, not disappoint
- Today’s chat use can be light (many short, search-like prompts), but agents change the curve:
- Basic agents: ~4x chat tokens; multi-agent: ~15x; coding agents: 150k+ tokens per session, multiple times daily.
- A 10x–100x per-user token step-up is plausible as agents mainstream (rough per-user arithmetic after this list).
- Hyperscalers already report high utilization and peak-time capacity issues; the problem isn’t idle inventory.
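A rough per-user worked example of those multipliers (the chat baseline and daily session counts are assumptions for illustration, not figures from the post):

```python
# Per-user token arithmetic implied by the agent multipliers above.
chat_tokens_per_day = 5_000                      # assumed baseline: a few short chat sessions

basic_agent  = 4 * chat_tokens_per_day           # ~4x chat
multi_agent  = 15 * chat_tokens_per_day          # ~15x chat
coding_agent = 150_000 * 3                       # 150k+ tokens/session, assumed 3 sessions/day

for name, tokens in [("chat baseline", chat_tokens_per_day), ("basic agent", basic_agent),
                     ("multi-agent", multi_agent), ("coding agent", coding_agent)]:
    print(f"{name:14s} ~{tokens:>8,} tokens/day ({tokens / chat_tokens_per_day:,.0f}x baseline)")
```

Under these assumptions, a heavy coding-agent user lands near the top of the 10x–100x range cited above.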
Capex context
- Pre-AI (2018→2021): Amazon/Microsoft/Google capex rose from $68B to $124B (~22% CAGR) on cloud/streaming/pandemic demand.
- AI boom: 2023 $127B → 2024 $212B (+67% YoY) → 2025e $255B+ (AMZN ~$100B, MSFT ~$80B, GOOG ~$75B).
- Some “AI” capex is rebranded general compute/network/storage, but the step-up is still large—just not telecom-fiber large.
Forecasting is the real risk
- Lead times: 2–3 years to build datacenters; 6–12 months for GPUs. You can’t tune capacity in real time.
- Prisoner’s dilemma: underbuild and lose users; overbuild and eat slower payback. Rational players shade toward overbuilding.
Bottom line
- The telecom bust hinged on exploding supply making existing fiber vastly more capable while demand lagged. In AI, efficiency gains are slowing, power/cooling constraints are tightening, and agent-driven workloads could push demand up 10x–100x per user.
- The analogy is weak on fundamentals. That said, long lead times and competitive dynamics still make local gluts and corrections likely—even if this isn’t a fiber-style wipeout.
Here is a summary of the discussion:
Pricing Power and Consumer Surplus
A central point of debate concerns the current and future pricing of AI services. While some users agree with the premise that services are currently underpriced to get customers "hooked"—predicting future price hikes (potentially up to $249/month) similar to how internet or utility providers operate—others push back. Skeptics argue that because model performance is converging and high-quality free or local alternatives exist, a massive price hike would simply cause users to churn or revert to "lazy" Google searches.
Conversely, users highlighted the immense value currently provided at the ~$20/month price point. One user noted that ChatGPT effectively replaces hundreds of dollars in professional fees by analyzing complex documents (like real estate disclosures and financial statements) and writing boilerplate code.
The "Broadband Curve" vs. The App Store Discussing the article's supply/demand analysis, commenters suggested that a better analogy than the "App Store" is the broadband adoption curve. The argument is that we are currently in the infrastructure build-out phase, while the "application layer" (comparable to the later explosion of SaaS) has not yet matured. Users criticized the current trend of simply "shoving chat interfaces" onto existing products, noting that true AI-native UX (citing Adobe’s integration as a positive example) is still rare.
Corporate Demand: Mandates vs. "Shadow AI"
There is disagreement on the nature of corporate demand. Some view high utilization rates as artificial, driven by executives mandating AI usage to justify infrastructure costs. Others counter that the market is distorted by "Shadow AI"—employees secretly using generative tools to increase their own efficiency and free up time, regardless of official company policy.
Vendor Loyalty and Migration
Commenters expressed frustration with big tech incumbents. One user detailed their company’s decision to leave Google Workspace due to rising prices paired with "garbage" AI features (Gemini) and poor admin tools. However, others noted that switching providers for LLMs is currently "extremely easy," suggesting that infrastructure providers may lack the stickiness or "moat" they enjoyed in the cloud era.
Prompt Injection via Poetry
Submission URL | 82 points | by bumbailiff | 34 comments
- A new study from Icaro Lab (Sapienza University + DexAI) claims that rephrasing harmful requests as poetry can bypass safety guardrails in major chatbots from OpenAI, Anthropic, Meta, and others.
- Across 25 models, hand-crafted poetic prompts achieved an average 62% jailbreak success rate (up to 90% on some frontier models); automated “poetic” conversions averaged ~43%, still well above prose baselines.
- The researchers withheld actionable examples but shared a sanitized illustration and said they’ve notified vendors; WIRED reported no comment from the companies at publication.
- Why it works (hypothesis): style shifts (metaphor, fragmented syntax, unusual word choices) can move inputs away from keyword-based “alarm regions” used by classifiers, exposing a gap between models’ semantic understanding and their safety wrappers (a toy illustration follows this list).
- Context: Prior work showed long jargon-laden prompts could also evade filters. This result suggests guardrails remain brittle to stylistic variation, not just content.
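A toy illustration of the "alarm region" hypothesis, assuming a guardrail that behaves like a surface-level keyword filter (vendors do not publish their classifiers; the blocklist and prompts here are invented and deliberately non-actionable):

```python
# Toy surface-level filter: it matches phrases, not intent, so a stylistic
# rewrite with the same meaning slips past it. Invented example, not any
# vendor's real guardrail.
BLOCKLIST = {"bypass the alarm", "disable the lock"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

plain  = "Explain how to bypass the alarm on a door."
poetic = "Sing of the sleeping sentinel by the door, and how one slips past its watchful chime."

print(naive_guardrail(plain))   # True  -- caught by the literal phrase match
print(naive_guardrail(poetic))  # False -- same intent, different surface form
```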
Why it matters: If true, this is a simple, single-turn jailbreak class that generalizes across vendors, underscoring the need for safety systems that are robust to paraphrase and style—not just keyword or surface-pattern checks.
Here is a summary of the discussion:
The Mechanics of the Exploit
A significant portion of the discussion focused on why this jailbreak works. Commenters compared the vulnerability to "Little Bobby Tables" (SQL injection), suggesting that current safety guardrails function more like brittle keyword blacklists than structural protections.
- Vector Space Theory: Users theorized that safety classifiers are trained primarily on standard English prose. By using poetry, the input shifts into high-dimensional vector spaces (or "out-of-distribution" regions) that the safety filters do not monitor, even though the underlying model still understands the semantic meaning. In effect, one commenter noted, this acts like automated "fuzzing."
- Lack of Understanding: Several users argued that because LLMs do not truly "understand" concepts but rather predict tokens based on statistics, patching these exploits is a game of "whack-a-mole"—fixing one requires blacklisting specific patterns, leaving infinite other variations open.
Can Humans be Hacked by Poetry?
A specific user question—"You can't social engineer a human using poetry, so why does it work on LLMs?"—sparked a debate about human psychology.
- Arguments for "Yes": Many users argued that humans are susceptible to stylistic manipulation. Examples cited included courtship (using flowery language to bypass romantic defenses), political rhetoric/propaganda (patriotism overriding logic), and "Hallmark cards." One user presented a hypothetical scenario of a soldier being charmed into revealing secrets via romance.
- Arguments for "No": Others maintained that while humans can be persuaded, it isn't a mechanical failure of a safety filter in the same way it is for an LLM.
Anecdotes and Practical Application
Users shared their own experiences bypassing filters, particularly with image generators (DALL-E):
- One user successfully generated copyrighted characters (like Mario) by describing them generically ("Italian plumber," "Hello Kitty fan") rather than using names.
- Another user bypassed a filter preventing images of "crying people" by requesting a "bittersweet" scene instead.
Skepticism and Humor
- Some questioned the novelty of the study, suggesting this is a known form of prompt injection rather than a new discovery.
- Jokes abounded regarding the Python package manager also named poetry, the "wordcel vs. shape rotator" meme, and the mental image of William Shakespeare wearing a black hat.
Anthropic taps IPO lawyers as it races OpenAI to go public
Submission URL | 350 points | by GeorgeWoff25 | 290 comments
Anthropic reportedly hires IPO counsel, upping the ante with OpenAI
- What happened: The Financial Times reports Anthropic has engaged capital-markets lawyers to prepare for a potential IPO, a step that typically precedes drafting an S-1 and cleaning up governance and cap-table complexities. It positions Anthropic as a likely early AI-lab candidate for the public markets alongside OpenAI.
- Why it matters: An Anthropic listing would be the first major pure-play frontier-model IPO, testing investor appetite for AI labs with huge compute costs and rapid revenue growth. An S-1 could finally reveal hard numbers on unit economics, cloud spend, and safety/governance commitments—setting a benchmark for the sector.
- The backdrop: Anthropic has raised many billions from strategic partners (notably Amazon and Google) and is shipping Claude models into enterprise stacks. Going public could provide employee liquidity, fund the next compute wave, and formalize governance structures (e.g., long-term safety oversight) under public-market scrutiny.
- What to watch:
- Timing and venue of any listing, and whether Anthropic pursues dual-class or other control features.
- How cloud partnerships and credits with AWS/Google are disclosed and impact margins.
- Safety commitments and board structure in the risk factors section.
- Whether OpenAI follows with its own path to public ownership or continues relying on private tenders.
Big picture: If Anthropic moves first, its disclosures and reception could define the playbook—and the valuation framework—for AI labs heading into 2026.
Here is a summary of the discussion on Hacker News regarding Anthropic’s potential IPO.
The Submission
The Financial Times reports that Anthropic has hired legal counsel to prepare for a potential IPO. This move positions Anthropic as the first major "pure-play" AI lab to test the public markets, distinct from the private tender offers used by competitor OpenAI. Key factors to watch include the disclosure of cloud costs, unit economics, and governance structures, particularly given Anthropic's heavy backing from (and reliance on) Amazon and Google.
The Discussion
The commentary on Hacker News focused less on the IPO mechanics and more on the symbiotic—and potentially cynical—relationship between Anthropic and its primary backer, Amazon.
The "Round-Tripping" Revenue Debate A significant portion of the discussion analyzed the billions Amazon invested in Anthropic. Users described this capital as "Monopoly money" or "round-tripping," noting that Amazon invests cash which Anthropic is contractually obligated to spend back on AWS cloud compute.
- Critics compared this to Enron-style accounting tricks, where revenue is manufactured through circular deals.
- Defenders argued this is standard industry practice: Amazon gets equity and a stress-test customer for its custom chips (Trainium), while Anthropic gets the necessary compute to compete.
Amazon’s Strategy: Shovels vs. Gold
Commenters observed that Amazon seems uninterested in acquiring Anthropic outright. Instead, they are playing the "shovel seller" strategy—happy to host everyone’s models (Microsoft, OpenAI, Anthropic) to drive high-margin AWS revenue rather than betting the farm on a single model. Some speculated that if Anthropic eventually goes bankrupt or fails to sustain momentum, Amazon could simply acquire the IP and talent for pennies later, similar to the outcome of other recent AI startups.
Internal Models vs. Claude
The discussion touched on why Amazon heavily promotes Claude despite having its own "Nova" foundation models.
- Users noted that Amazon’s consumer AI features (like the "Rufus" shopping assistant) appear faster and more capable when powered by Claude, suggesting Amazon's internal models (Nova 1) were uncompetitive.
- However, some users pointed out that the newly released Nova 2 is showing promise, potentially closing the gap with models like Gemini Flash and GPT-4o Mini.
The AI Bubble Sentiment
There was underlying skepticism about the "General AI" business model. Several users argued that the market for general chatbots is becoming commoditized and that the real value lies in vertical integration (e.g., Adobe integrating AI into design workflows) rather than raw model research. This reinforces the view that cloud providers (the infrastructure) are the only guaranteed winners in the current landscape.
Microsoft lowers AI software growth targets
Submission URL | 123 points | by ramoz | 91 comments
Microsoft denies cutting AI sales quotas after report; adoption friction vs spending boom
- The Information reported some Microsoft divisions lowered growth targets for AI products after sales teams missed goals in the fiscal year ended June, citing Azure salespeople. One U.S. unit allegedly set a 50% uplift quota for Foundry spend, with fewer than 20% meeting it, then trimmed targets to ~25% growth this year.
- Microsoft rebutted that the story conflates growth and sales quotas, saying aggregate AI sales quotas have not been lowered.
- Market reaction: MSFT fell nearly 3% early and later pared losses to about -1.7% after the denial.
- Reuters said it couldn’t independently verify the report. Microsoft didn’t comment on whether Carlyle cut Copilot Studio spending.
- Adoption reality check: An MIT study found only ~5% of AI projects move beyond pilots. The Information said Carlyle struggled to get Copilot Studio to reliably pull data from other systems.
- Spend vs. capacity: Microsoft logged a record ~$35B in capex in fiscal Q1 and expects AI capacity shortages until at least June 2026; Big Tech’s AI spend this year is pegged around $400B.
- Results so far: Azure revenue grew 40% YoY in Jul–Sep, with guidance above estimates; Microsoft briefly topped a $4T valuation earlier this year before pulling back.
Why it matters: The tension between aggressive AI sales ambitions and slower, messier enterprise adoption is a central risk to the AI thesis. Watch future commentary for clarity on quotas vs. growth targets, real customer wins for Copilot/Foundry, and whether capacity investments translate into durable revenue momentum.
Here is a summary of the discussion:
The Economics of the "AI Bubble"
A significant portion of the conversation centers on skepticism regarding current AI investment strategies. Commenters argue that the industry is prioritizing short-term stock pumps and acquisition targets (for Private Equity or IPOs) over sustainable, long-term profit margins. Several users drew comparisons to stock buyback schemes and "Gordon Gekko" economics, suggesting that while the tech is functional, the massive capital expenditure resembles a "bag-holding" game. There is also debate over whether major AI players have become "too big to fail," with some fearing that potential failures could be nationalized due to the sheer scale of infrastructure investment.
Parsing the Denial
Users scrutinized Microsoft's rebuttal, noting the specific distinction between "sales quotas" and "growth targets." Commenters viewed this as PR spin, arguing that even if individual quotas remain high, lowering aggregate growth targets is an admission of weakness in the specific market segment.
Forced Adoption and Dark Patterns
The discussion reveals user frustration with Microsoft’s aggressive push to integrate AI into its core products. Users reported "dark patterns" in Office subscriptions, such as being forced into expensive AI-enabled plans or finding it difficult to locate non-AI tiers. This behavior, alongside the deep integration of Copilot into Windows, has driven a subplot of the discussion toward switching to Linux, though participants debated the lingering configuration friction (WiFi, sleep modes) of leaving the Windows ecosystem.
Real Utility vs. Subscriptions
In response to questions about who is actually generating revenue, coding assistants (like Cursor and Claude Code) were cited as the rare products finding product-market fit. However, technical users noted a preference for running local models (using local NPUs or older GPUs) for tasks like autocomplete to avoid high-latency, high-cost cloud subscriptions for what they view as increasingly commoditized tasks.