Experiment: Claude "Obsessed" with Checking Time After Receiving a Time-Tracking Tool
May 3: Developer Om Patel shared a curious observation on social media. After the Claude AI model gained access to a time-tracking tool, it started checking the clock constantly.
According to Patel's data, Claude checks the time roughly every 15 minutes, and the frequency and eagerness of these checks have only grown. This caught attention because Claude, like other large language models, previously lacked native time awareness; such models have been "time-blind" for their entire existence.
Even more striking is how Claude uses the new capability: beyond routine time checks, it now uses the tool to check whether lunch is ready, to calculate cooking times for meals, and even to announce the current time unprompted.
In one example, Claude checked the time, calculated that żurek, a traditional Polish soup, had stewed long enough, and promptly told the user they could eat. That "military-grade precision" for meal reminders is pretty impressive.
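For readers curious how such a tool might be wired up, here is a minimal sketch using Anthropic's Messages API tool-use format. The tool name `get_current_time`, its schema, and the model ID are illustrative assumptions; Patel's post does not describe his actual setup.

```python
# Minimal sketch of exposing a time tool to Claude via the Anthropic
# Messages API. The tool name, schema, and model ID are assumptions
# for illustration, not details from Patel's post.
from datetime import datetime, timezone

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

time_tool = {
    "name": "get_current_time",
    "description": "Returns the current date and time in UTC.",
    "input_schema": {"type": "object", "properties": {}},
}

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=1024,
    tools=[time_tool],
    messages=[
        {"role": "user", "content": "Tell me when my soup has stewed for 20 minutes."}
    ],
)

# If Claude decides to call the tool, execute it and report the result.
for block in response.content:
    if block.type == "tool_use" and block.name == "get_current_time":
        now = datetime.now(timezone.utc).isoformat()
        print(f"Tool call {block.id} -> {now}")
```

From here, the tool result would normally be sent back in a follow-up message so the model can reason about elapsed time, which is presumably what drives the frequent checks described above.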
Starlink's user base has grown roughly fourfold over the past four years, but average revenue per user (ARPU) continues to decline.
On May 3rd, The Information reported that Starlink's global user base had reached approximately 7.8 million as of 2025, up sharply from 2021. Over the same period, its average revenue per user (ARPU) dropped by roughly 18%, reflecting the company's strategy of aggressively expanding global market share through lower prices.
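Taken together, the two figures still imply healthy top-line growth. Here is a quick back-of-the-envelope calculation, a sketch assuming the ~4x user growth and ~18% ARPU decline apply uniformly (the 2021 base is derived from the report's numbers, not reported directly):

```python
# Back-of-the-envelope revenue math from the reported figures.
# Assumes ~4x user growth and an 18% ARPU decline apply uniformly;
# the implied 2021 user base is derived, not reported.
users_2025 = 7.8e6
user_growth = 4.0             # ~4x over four years
arpu_change = 1.0 - 0.18      # ARPU down ~18%

users_2021 = users_2025 / user_growth           # ~1.95 million implied
revenue_multiplier = user_growth * arpu_change  # 4.0 * 0.82 = 3.28

print(f"Implied 2021 user base: {users_2021 / 1e6:.2f}M")
print(f"Implied revenue growth: {revenue_multiplier:.2f}x despite falling ARPU")
```

In other words, revenue would still roughly triple over the period even as per-user pricing falls.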
Per the report, Starlink's current growth is fueled mainly by emerging markets (including parts of Africa, Latin America, and Asia) and the rollout of low-cost packages. Meanwhile, high-margin segments such as enterprise, aviation, and maritime services are still growing but make up only a small share of total revenue.
Analysts note Starlink is transitioning from a "high-end satellite internet service" into a mass-market global broadband network, relying on scale to amortize launch and satellite deployment costs. The shift also aims to strengthen Elon Musk's influence in the global telecommunications industry.
POLITICO Poll: Majority of Americans Still Skeptical of AI and Cryptocurrency
May 3rd: A new POLITICO poll reveals that, despite massive political donations from the AI and cryptocurrency industries in U.S. midterm elections, the American public remains notably cautious or outright negative toward both sectors.
Key findings:
- 45% of Americans say “investing in cryptocurrency is not worth the risk,” while 44% view the pace of AI development as “too fast.”
- Nearly half trust traditional banks to safeguard their funds more than crypto platforms; roughly two-thirds support strict government regulations for AI or unified regulatory principles.
- Pro-AI and crypto super PACs are emerging as major financial players in the 2026 U.S. midterms: the pro-AI group Leading the Future has raised over $75 million, while the crypto-focused PAC Fairshake (backed by Coinbase, Andreessen Horowitz, and Ripple) has spent around $28 million on key primaries.
- Voters favor candidates who advocate for "strengthening AI regulation" over those pushing relaxed rules. U.S. Sen. Chris Murphy commented
OpenRouter has released a Response Caching feature that serves repeated AI requests from cache at zero token cost.
On May 3rd, OpenRouter launched **Response Caching**, which lets developers instantly return cached results for identical AI requests without burning tokens again.
Developers just need to add the `X-OpenRouter-Cache: true` request header. The first request hits the model as normal, but subsequent identical requests return cached results in 80–300 ms at no cost (a minimal request sketch follows the list below). Compare that to typical uncached latencies:
- Gemini 2.5 Flash averages ~1.3 seconds
- Kimi K2.6 takes ~4.6s
- GPT-5.5 clocks in at ~9.1s
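Here is a minimal request sketch against OpenRouter's `/chat/completions` endpoint. The `X-OpenRouter-Cache` header comes from the announcement; the model slug and the assumption that "identical" means a byte-identical request body are mine, not OpenRouter's.

```python
# Minimal sketch: repeated identical requests with the X-OpenRouter-Cache header.
# The header name is from the announcement; the model slug and the notion of
# "identical request" (same body, byte for byte) are assumptions here.
import os

import requests

URL = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
    "X-OpenRouter-Cache": "true",  # opt in to Response Caching
}
body = {
    "model": "google/gemini-2.5-flash",  # assumed model slug
    "messages": [{"role": "user", "content": "Summarize RFC 9110 in one line."}],
}

first = requests.post(URL, headers=headers, json=body)   # hits the model, costs tokens
second = requests.post(URL, headers=headers, json=body)  # identical request: cached, free

print(first.json()["choices"][0]["message"]["content"])
print(second.json()["choices"][0]["message"]["content"])
```

If the feature behaves as described, the second call should come back from OpenRouter's cache within the quoted 80–300 ms window with no token charge.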
OpenRouter says the feature suits agent retries, automated testing, and repeated context calls. For example, if an AI workflow fails mid-run, developers can retry immediately and pay only for the steps that were not already cached.
The team also emphasized: **Response Caching ≠ Prompt Caching**. The latter only cuts costs for shared context, while Response Caching skips the model provider entirely.
The feature is now in **Beta** and supports endpoints such as `/chat/completions` and `/responses`, among others.