dataviz1000 a day ago

I use Playwright to intercept all requests and responses and have Claude Code navigate to a website like YouTube and click and interact with all the elements and inputs while recording all the requests and responses associated with each interaction. Then it creates a detailed strongly typed API to interact with any website using the underlying API.

Yes, I know it likely breaks everybody's terms of service but at the same time I'm not loading gigabytes of ads, images, markup, to accomplish things.

If anyone is interested I can take some time and publish it this week.

  • bredren 20 hours ago

    I also do this. My primary use case is for reproducing page layout and styling at any given tree in the dom. So, capturing various states of a component etc.

    I also use it to automatically retrieve page responsiveness behavior in complex web apps. It uses playwright to adjust the width and monitor entire trees for exact changes which it writes structured data that includes the complete cascade of styles relevant with screenshots to support the snapshots.

    There are tools you can buy that let you do this kind of inspection manually, but they are designed for humans. So, lots of clickety-clackety and human speed results.

    ---

    My first reaction to seeing this FP was why are people still releasing MCPs? So far I've managed to completely avoid that hype loop and went straight to building custom CLIs even before skills were a thing.

    I think people are still not realizing the power and efficiency of direct access to things you want and skills to guide the AI in using the access effectively.

    Maybe I'm missing something in this particular use case?

    • ranyume 3 hours ago

      > My first reaction to seeing this FP was why are people still releasing MCPs?

      MCPs are more difficult to use. You need to use an agent to use the tools, can't do it manually easily. I wonder if some people see that friction as a feature.

    • AlphaSite 16 hours ago

      its mostly because MCPs handle auth in a standardised way and give you a framework you can layer things like auth, etc on top of.

      Without it youre stuck with the basic http firewall, etc which is extremely dangerous and this is maybe the 1 opportunity we have to do this.

      • re5i5tor 7 hours ago

        And people forget, Claude Code isn’t the only Claude surface, and CLIs don’t help in other surfaces other than Cowork.

  • halJordan 20 hours ago

    I love how HN is loving this idea when it's the exact same thing Anthropic and OpenAi (and every other llm maker) did.

    It's God's gift to them when it lets them bypass ads and dl copyrighted material. But it's Satan's curse on humanity when the Zuck does it to train his llm and dl copyrighted material.

    • joks 19 minutes ago

      I think there's a little bit of the Goomba fallacy at play here to be fair

    • deaux 19 hours ago

      Both scale and purpose make them completely different things. You're acting as if they're the same when they're not.

    • eipi10_hn 17 hours ago

      I won't comment about dl but ads are trackers and spyware for me. I don't spy on websites' owners, I have my human rights to stop those trackers.

      Zuck serves ads/spywares to other users, he deserves to taste his own medicines, not me.

    • coldtea 8 hours ago

      Yes, it's a god's gift when the average user can do it, and satan's curse what a hated fucking mega-corp is doing it.

      Where's the contradiction?

    • friendzis 12 hours ago

      You can see this pattern in many different topics: updoots are highly correlated with a positive answer to "do I personally get to profit"?

      • achierius 12 hours ago

        Yes, and? People need to eat. Billionaires are generally not interested in whether or not the average Joe gets to eat.

    • cyberax 15 hours ago

      I would love to pay for content. I'm _paying_ for YouTube Premium.

      But heck. Do I hate the YouTube interface, it degraded far past usability.

      • zx8080 15 hours ago

        Write to their support. Oh, wait.

    • tclancy 19 hours ago

      So you’re that Hal Jordan then? Why would a Green Lantern feel the need to defend either? I feel like the Guardians would not accept your arguments as soon as you got to Oa, poozer. I guess what I am saying is don’t have a famous name. Seems obvious.

      • llbbdd 18 hours ago

        OP appears to be talking about real life. What are you on about?

        • bryanrasmussen 17 hours ago

          the user name he is responding to is HalJordan, Hal Jordan is the name of a comic book superhero: Green Lantern, a moral paragon.

          on edit: he is evidently being "sarcastic"

    • miki123211 10 hours ago

      You conflate web crawling for inference with web crawling for training.

      Web crawling for training is when you ingest content on a mass scale, usually indiscriminately, usually with a dumb crawler for scale's sake, for the purposes of training an LLM. You don't really care whether one particular website is in the dataset (unless it's the size of Reddit), you just want a large, diverse, high-quality data mix.

      Web crawling for inference is when a user asks a targeted question, you do a web search, and fetch exactly those resources that are likely to be relevant to that search. Nothing ends up in the training data, it's just context enrichment.

      People have a much larger issue with crawling for training than for inference (though I personally think both are equally ok).

  • Axsuul a day ago

    Why even use Playwright for this? I feel like Claude just needs agent-browser and it can generate deterministic code from it.

    • dsrtslnd23 21 hours ago
      • dataviz1000 21 hours ago

        It is 2 months old!

        My excuse for not keeping up is that I'm in so deep that Claude Code can predict the stock market.

        I'll still publish mine and see if has any value but agent browser looks very complete.

        Thank you for sharing!

        • botanrice 3 hours ago

          I'm curious, have you developed your own reasoning system for how Claude can predict the stock market? Or have you trained it on past data combined with news sources?

        • Barbing 15 hours ago

          >I'm in so deep that Claude Code can predict the stock market.

          “What?”, more polite than “yeah right” :)

          (oh I guess obviously it would have a chance at nailing it for weeks in a row, and have more good years than bad—since actively managed funds can pull that off until, universally, they can’t [beat the market])

        • djsavvy 7 hours ago

          > Claude Code can predict the stock market.

          Please say more!

        • citizenpaul 16 hours ago

          >I'm in so deep that Claude Code can predict the stock market.

          What?

    • thefreeman 18 hours ago

      You can just start claude with the —chrome flag too and it will connect to the chrome extension.

  • swyx an hour ago

    yes please! i need a "comment to follow" functionality on HN

  • kolinko 9 hours ago

    Please do.

    Did you compare playwright with mcp? Why one over another?

    I use MCP usually, because I heard it’s less detectable than playwright, and more robust against design changes, but I didn’t compare/test myself

  • schainks 21 hours ago

    Very interested. Would even pay for an api for this. I am doing something similar with vibium and need something more token efficient.

    • hugs 15 hours ago

      have you tried vibium's cli + agent skill?

  • cbility 6 hours ago

    I use chrome devtools MCP to the same end - it works great for me. Interested in what advantages you see in using Playwright over chrome devtools?

  • defen a day ago

    Would this hypothetically be able to download arbitrary videos from youtube without the constant yt-dlp arms race?

    • dawnerd a day ago

      Don’t know how this could be more stable than ytdlp. When issues come up they’re fixed really quickly.

      • varenc 21 hours ago

        yt-dlp was very recently broken for ~2 days for any Youtube videos that required cookies: https://github.com/yt-dlp/yt-dlp/issues/16212

        Here is what actually fixed it: https://github.com/yt-dlp/ejs/pull/53/changes

        yt-dlp is relatively stable, but still occasionally breaks for long periods. I get the sense YouTube is becoming increasingly adversarial to yt-dlp as well.

        I don't know the details, but it doesn't seem like yt-dlp is running the entire YouTube JS+DOM environment. Something like a real headless browser seems like it would break less often, but be much heavier weight. And Youtube might have all sorts of other mitigations against this approach.

        • 22c 18 hours ago

          > yt-dlp is running the entire YouTube JS+DOM environment

          IIRC they maintain a minimal execution environment that is able to run just the JS needed to pass a few checks but this breaks too often enough that they're planning to make Node.js or another JS interpreter a hard requirement (possibly already happened).

          • defrost 17 hours ago

            Pretty much - yt-dlp currently requires Deno to "solve" youtube challenges.

            * https://deno.com/

            * there may well be other JS interpreters that are accepted, can be used - but solving JS challenges is required for much, if not all, YT content.

        • coro_1 18 hours ago

          > I get the sense YouTube is becoming increasingly adversarial to yt-dlp as well.

          I rarely use yt-dlp anymore.

          Before I just updated. Now when I do that, it usually becomes complex and full of questions.

        • toomuchtodo 21 hours ago

          I think having a hook to an LLM endpoint to enable yt-dlp to attempt to self resolve until an official fix is available would be a useful enhancement.

    • dataviz1000 a day ago

      > yt-dlp arms race

      I don't know anything about yt-dlp.

      It would probably help people who want to go to a concert and have a chance to beat the scalpers cornering the market on an event in 30 seconds hitting the marketplace services with 20,000 requests.

      I can try to see if can bypass yt-dlp. But that is always a cat and mouse game.

      • defen a day ago

        To clarify - yt-dlp is a command line tool for downloading youtube videos, but it's in a constant arms race with the youtube website because they are constantly changing things in a way that blocks yt-dlp.

        • dexterdog 19 hours ago

          I wouldn't call it an arms race. I don't update my client that often and I rarely have problems downloading any video with it.

    • phantomathkg 17 hours ago

      If it can save all the video/audio fragment and call ffmpeg to join them together. Maybe?

  • Johnny_Bonk 21 hours ago

    Yes, please do and ping me when it's done lol. Did you make it into an agent skill?

    • dataviz1000 21 hours ago

      Exactly, it is an agent skill that interacts pressing buttons and stuff with a webpage capturing and documenting all the API requests the page makes using Playwright's request / response interception methods. It creates and strongly typed well documented API at the end.

      • bengt 21 hours ago

        Sounds awesome. I've been using mitmproxy's --mode local to intercept with a separate skill to read flow files dumped from it, but interactive is even better.

  • miohtama 20 hours ago

    I just ask Claude to reverse engineer the site with Chrome MCP. It goes to work by itself, uses your Chrome logged in session cookies, etc.

  • zacmps 11 hours ago

    I would love it if you had time to publish it!

  • mikrl 21 hours ago

    I was doing similar by capturing XHR requests while clicking through manually, then asking codex to reverse engineer the API from the export.

    Never tried that level of autonomy though. How long is your iteration cycle?

    If I had to guess, mine was maybe 10-20 minutes over a few prompts.

  • rkagerer 16 hours ago

    I assume you're not logged into those sites, in order to avoid bans and the risk of hitting the wrong button like, say, "Delete Account".

    • dataviz1000 15 hours ago

      It turns any authenticated browser session into a fully typed REST API proxy — exposing discovered endpoints as local Hono routes that relay requests through the browser, so cookies and auth are automatic.

      The point is that it creates an API proxy in code that a Typescript server calls directly. The AI runs for about 10 minutes with codegen. The rest of the time it is just API calls to a service. Remove the endpoint for "Delete Account" and that API endpoint never gets called.

      This 100% breaks everyone's terms of service. I would not recommend nor encourage using.

  • 3abiton 13 hours ago

    I always used playwrite as an alternative to selenium, relatively surprised by its ability to interface with LLMs.

  • TimCTRL 10 hours ago

    +1, publish, but how will we know when you have published...

  • xrd a day ago

    Yes, please do!

    • dataviz1000 a day ago

      100% I'll response to this by Friday with link to Github.

      I use Patchright + Ghostery and I have a cleaver tool that uses web sockets to pass 1 second interval screenshots to the a dashboard and pointer / keyboard events to the server which allow interacting with websites so that a user can create authentication that is stored in the chrome user profile with all the cookies, history, local storage, ect.. in the cloud on a server.

      Can you list some websites that don't require subscription that you would like to me to test against? I used this for Robinhood and I think Linked in would be a good example for people to use.

      • botanrice 3 hours ago

        Would you be open to sharing your Github profile now so I could follow you? I don't check on here very often.

      • zzleeper a day ago

        Another +1, it would be incredibly useful to play with this approach! (and fun)

  • citizenpaul 16 hours ago

    Id like to see this published as well thx!

  • heystefan 20 hours ago

    Commenting to follow up.

  • retinaros 21 hours ago

    isnt it what everyone that needs web validation does?

paulirish a day ago

The DevTools MCP project just recently landed a standalone CLI: https://github.com/ChromeDevTools/chrome-devtools-mcp/blob/m...

Great news to all of us keenly aware of MCP's wild token costs. ;)

The CLI hasn't been announced yet (sorry guys!), but it is shipping in the latest v0.20.0 release. (Disclaimer: I used to work on the DevTools team. And I still do, too)

  • hank1931 18 hours ago

    Love the Mitch Hedberg reference! Thank you! Always good to get a little Mitch!

    ‘I don’t have a girlfriend. But I do know a woman who’d be mad at me for saying that.’

    ‘I’m against picketing, but I don’t know how to show it.’

    ‘I haven’t slept for ten days, because that would be too long.’

    ‘I like to play blackjack. I’m not addicted to gambling. I’m addicted to sitting in a semi-circle.’

    • paulirish 18 hours ago

      "I was going to get my teeth whitened but then I said, fuck that, I'll just get a tan instead."

  • abhikul0 2 hours ago

    It doesn't seem to work, tried the -u flag with the default address and it just couldn't connect to the existing chrome instance.

  • dreadnip 10 hours ago

    The big upside of the MCP is that it connects to already open browser windows. I tried the skill but it always tries to open new windows. Is there a way to get the `--autoConnect` behaviour with the CLI?

  • albert_e 17 hours ago

    Woah got this for the first time.

    > Too many requests

    You have exceeded a secondary rate limit.

    Please wait a few minutes before you try again; in some cases this may take up to an hour. Signing in may provide a higher rate limit if you are not already signed in.

    For more on scraping GitHub and how it may affect your rights, please review our Terms of Service.

  • commanderkeen08 a day ago

    MCPs cost nothing in CC now with Tool Search.

    • cheema33 21 hours ago

      > MCPs cost nothing in CC now with Tool Search.

      This is incorrect. Plenty of people have run the numbers. Tool search does not fix all problems with MCP.

      • ehsanu1 21 hours ago

        What are the numbers? Are there problems other than context usage you refer to?

      • lewisjoe 16 hours ago

        Can you elaborate more?

aadishv a day ago

Someone already made a great agent skill for this, which I'm using daily, and it's been very cool!

https://github.com/pasky/chrome-cdp-skill

For example, I use codex to manage a local music library, and it was able to use the skill to open a YT Music tab in my browser, search for each album, and get the URL to pass to yt-dlp.

Do note that it only works for Chrome browsers rn, so you have to edit the script to point to a different Chromium browser's binary (e.g. I use Helium) but it's simple enough

  • Etheryte a day ago

    On one hand, cool demo, on the other, this is horrifying in more ways than I can begin to describe. You're literally one prompt injection away from someone having unlimited access to all of your everything.

    • mh- a day ago

      Not the person you're replying to, but: I just use a separate, dedicated Chrome profile that isn't logged into anything except what I'm working on. Then I keep the persistence, but without commingling in a way that dramatically increases the risk.

      edit: upon rereading, I now realize the (different) prompt injection risk you were calling out re: the handoff to yt-dlp. Separate profiles won't save you from that, though there are other approaches.

      • sofixa a day ago

        Even without the bash escape risk (which can be mitigated with the various ways of only allowing yt-dlp to be executed), YT Music is a paid service gated behind a Google account, with associated payment method. Even just stealing the auth cookie is pretty serious in terms of damage it could do.

        • mh- a day ago

          Agreed. I wouldn't cut loose an agent that's at risk of prompt injection w/ unscoped access to my primary Google account.

          But if I understood the original commenter's use case, they're just searching YT Music to get the URL to a given song. This appears[0] to work fine without being logged in. So you could parameterize or wrap the call to yt-dlp and only have your cookie jar usable there.

          [0]: https://music.youtube.com/search?q=sandstorm

          [1]: https://music.youtube.com/watch?v=XjvkxXblpz8

          • sofixa a day ago

            Oh, that's true, even allows you to play without an account. I can swear that at some point it flat out refused any use unless you're logged in with an account that has YT Music (I remember having to go to regular YouTube to get the same song to send it to someone who didn't have it).

    • aadishv a day ago

      Of course I still watch it and have my finger on the escape key at all times :)

      • glenpierce a day ago

        I am in awe of the confidence you have in your reflexes.

        • aadishv a day ago

          You get used to it :) And especially once you get used to the YOLO lifestyle, you end up realizing that practically any form of security is entirely worthless when you're dealing with a 200 IQ brainwashed robot hacker.

          I think using the Pi coding agent really got me used to this way of thinking: https://mariozechner.at/posts/2025-11-30-pi-coding-agent/#to...

      • bergheim a day ago

        For now you are. All these things fall with time, of course. You will stop caring once you start feeling safe, we all do.

        Also. AAarrgh, my new thing to be annoyed at is AI drivel written slop.

        "No browser automation framework, no separate browser instance, no re-login."

        Oh really, nice. No separate computer either? No separate power station, no house, no star wars? No something else we didn't ask for? Just one a toggle and you go? Whoaaaaaa.

        Edit: lol even the skill itself is vibe coded:

        Lightweight Chrome DevTools Protocol CLI. Connects directly via WebSocket — no Puppeteer, works with 100+ tabs, instant connection.

        I feel like there's nothing fucking left on the internet anymore that is not some mean of whatever the LLM is trained to talk like now.

        • tacitusarc a day ago

          What can you do? I mentioned the use of AI on another thread, asking essentially the same question. The comment was flagged, presumably as off topic. Fair enough, I guess. But about 80% (maybe more) of posted blogs etc that I see on HN now have very obvious signs of AI. Comments do too. I hate it. If I want to see what Claude thinks I can ask it.

          HN is becoming close to unusable, and this isn’t like the previous times where people say it’s like reddit or something. It is inundated with bot spam, it just happens the bot spam is sufficiently engaging and well-written that it is really hard to address.

          • brabel 12 hours ago

            Could you just be paranoid about it and seeing things where they aren’t? I can’t imagine someone using AI to comment on HN!

          • bergheim a day ago

            I hear you and I agree. I don't know. Gated communities?

    • sheepscreek a day ago

      As long as it’s gated and not turned on by default, it’s all good. They could also add a warning/sanity check similar to “allow pasting” in the console.

      • hrmtst93837 a day ago

        Relying on warnings or opt-ins for something with this blast radius is security theater more than protection. The cleverest malware barely waits for you to click OK before making itself at home, so that checkbox is a speed bump on a highway.

        Chrome's 'allow pasting' gets ignored reflexively by most users anyway. If this agent can touch DevTools the attack surface expands far faster than most people realize or will ever audit.

  • esperent 18 hours ago

    > Most browser automation tools launch a fresh, isolated browser. This one connects to the Chrome you're already running

    Is this the same as what Claude in Chrome does?

    I tried that for a while and since I use Firefox and Chromium, the security problem of it seeing your tabs wasn't a big deal. Fresh Chrome install, only ever used for this exact purpose. Plus you can watch it working in real (actually very slow) time so if you did point it at something risky you can take over at any point.

    For actual testing of web apps though, a skill with playwright cli in headless mode is much more effective. About 1-2k context per interaction after a bit of tuning.

  • paulirish a day ago

    To be clear, this isn't a skill for the devtools mcp, but an independent project. It doesn't look bad, but obviously browser automation + agents is a very busy space with lots of parallel efforts.

    DevTools MCP and its new CLI are maintained by the team behind Chrome DevTools & Puppeteer and it certainly has a more comprehensive feature set. I'd expect it to be more reliable, but.. hey open source competition breeds innovation and I love that. :)

    (I used to work on the DevTools team. And I still do, too)

  • xmorse 21 hours ago

    Does anyone really use these hacked up with duct tape skills? why not use something more reliable like playwriter.dev?

  • Mashimo 11 hours ago

    Mhh, yt-dlp already has a build in youtube search, could you not use that instead of anything with AI?

mmaunder a day ago

Google is so far behind agentic cli coding. Gemini CLI is awful. So bad in fact that it’s clear none of their team use it. Also MCP is very obviously dead, as any of us doing heavy agentic coding know. Why permanently sacrifice that chunk of your context window when you can just use CLI tools which are also faster and more flexible and many are already trained in. Playwright with headless Chromium or headed chrome is what anyone serious is using and we get all the dev and inspection tools already. And it works perfectly. This only has appeal to those starting out and confused into thinking this is the way. The answer is almost never MCP.

  • zeroxfe 21 hours ago

    > Also MCP is very obviously dead, as any of us doing heavy agentic coding know.

    As someone that does heavy agentic coding (using basically all the tools), this is so far from the truth. People claiming this have probably never worked in large enterprise environments where things like authentication, RBAC, rate limiting, abuse detection, centralized management/updates/ops, etc. are a huge part of the development and deployment workflow.

    In these situations you can't just use skills and cli tools without a gigantic amount of retooling and increased operational and security complexity. MCP is really useful here, and allows centralized eng and ops teams to manage their services in a way that aligns with the organizations overall posture, policies, and infrastructure.

    > Google is so far behind agentic cli coding. Gemini CLI is awful.

    This part I totally agree. It's really hard to express how bad it is (and it's really disappointing.)

    • bloppe 18 hours ago

      > you can't just use skills and cli tools without a gigantic amount of retooling and increased operational and security complexity

      You're describing MCP. After all, MCP is just reinventing the OpenAPI wheel. You can just have a self-documenting REST API using OpenAPI. Put the spec in your context and your model knows how to use it. You can have all the RBAC and rate limiting and auth you want. Heck, you could even build all that complexity into a CLI tool if you want. MCP the protocol doesn't actually enable anything. And implementing an MCP server is exactly as complex as using any other established protocol if you're using all those features anyway

      • whattheheckheck 17 hours ago

        The clients for mcp can drop a url and http in the mcp.json and get access to the application. Can the client do that for every rest api?

        • bloppe 16 hours ago

          Ya, if you just use OpenAPI. That's why I'm saying MCP adds nothing. It's just another standard for documenting APIs. There are many that have been around for a long time and that are better integrated with existing ecosystems. There's also gRPC reflection. I'm sure there are others. LLMs can use them all equally effectively.

    • moritonal 20 hours ago

      Given MCP is supposed to just be a standardised format for self-describing APIs, why are all the features you listed MCP related things? It sounds more like it's forced the enterprise to build such features which cli tooling didn't have?

      • rsalus 20 hours ago

        mostly by virtue of being a common standard. MCP servers are primarily useful in a remote environment, where centralized management of cross-cutting concerns matters. also its really useful for integrating existing distributed services, e.g., internal data lakes.

        I think it's clear a self-describing CLI is optimal for local-first tooling and portability. I personally view remote MCP servers as complementary in the space.

      • tomnipotent 19 hours ago

        MCP's can hide most things behind an API.

  • IX-103 20 hours ago

    FYI: Gemini Cli is used internally at Google. It's actually more popular than Antigravity. Google uses MCP services internally for code search (since everything is in a mono-repo you don't want to waste time grepping billions of files), accessing docs and bugs, and also accessing project specific RAG databases for expertise grounding.

    Source - I know people at Google.

  • cheema33 21 hours ago

    > Also MCP is very obviously dead

    Some people will push back on this. They are holding out hope that the recent improvements Anthropic has made in this regard have improved the context rot problem with MCP. Anthropic's changes improve things a little. But it is akin to putting lipstick on a pig. It helps, but not much.

    The reason MCP is dying/dead is because MCP servers, once configured, bloat up context even when they are not being used. Why would anybody want that?

    Use agent skills. And say goodbye to MCP. We need to move on from MCP.

    • maxwellg 18 hours ago

      Is your agent harness dropping the entire MCP server tool description output directly into the context window? Is your agent harness always addig MCP servers to the context even when they are not being used?

      MCP is a wire format protocol between clients and servers. What ends up inside the context window is the agent builder's decision.

    • staticassertion 3 hours ago

      I'm a layman here. How is a skill any better? Aren't agent tools loaded on-demand, just as a skill would be? People are mentioning OpenAPI, but wouldn't you need to load the spec for that too?

    • ktoo_ 19 hours ago

      > it is akin to putting lipstick on a pig. It helps, but not much.

      The lipstick helps? This had me in stitches. Sorry for the non-additive reply. This is the funniest way I have seen this or any other phrase explained. By far. Honestly has made my day and set me up for the whole week.

    • dominotw 21 hours ago

      i am using notion mcp. is there a corresponding skill. also wtf is a plugin.

    • Rapzid 20 hours ago

      The bloat problem is already out dated though. People are having the LLM pick the MCP servers it needs for a particular task up front, or picking them out-of-band, so the full list doesn't exist in the context every call.

  • edwinjm 19 hours ago

    MCP is dead? Which cli tool should we use to instruct Chrome to open a page and click the Open button? And to read what appears in the console after clicking?

    MCP permanently sacrifice a chunk of the context window? And a skill for you cli is free?

  • rsalus a day ago

    MCP is very much not dead. centralized remote MCP servers are incredibly useful. also bespoke CLIs still require guidance for models to use effectively, so it's clear that token efficiency is still an issue regardless.

    • Torn 21 hours ago

      Tbh I find self-documenting CLIs (e.g. with a `--help` flag, and printing correct usage examples when LLMs make things up) plus a skill that's auto invoked to be pretty reliable. CLIs can do OAuth dances too just fine.

      MCP's remaining moats I think are:

      - No-install product integrations (just paste in mcp config into app)

      - Non-developer end users / no shell needed (no terminal)

      - Multi-tenant auth (many users, dynamic OAuth)

      - Security sandboxing (restrict what agents can do), credential sandboxing (agents never see secrets)

      - Compliance/audit (structured logs, schema enforcement)?

      If you're a developer building for developers though, CLI seems to be a clear winner right

      • quotemstr 21 hours ago

        Imagine if, in addition to local MCP "servers", the MCP people had nurtured a structured CLI-based --help-equivalent consumable by LLMs and shell completion engines alike. Doing so, you unify "CLI" (trivial deployment; human accessibility) and MCP-style (structured and discoverable tool calling) in a single DWIM artifact.

        But since when has this industry done the right thing informed by wisdom and hindsight?

        • debugnik 9 hours ago

          That structured CLI already exists: PowerShell cmdlets.

          • quotemstr 2 hours ago

            Not in any meaningful, general way on Unix systems it doesn't. Nobody uses psh outside Windows

            • debugnik an hour ago

              But nobody is using your hypothetical "structured CLI-based --help-equivalent consumable by LLMs and shell completion engines alike" either. In terms if mindshare, you're starting from scratch either way.

              I just remembered docopt, which maybe fits the bill in a more Unixy way, but it and its ports have become abandonware, for various reasons.

        • rsalus 20 hours ago

          that's a pretty interesting idea. It would be nice if there was such a standard. the approach I'm taking right now: a CLI that accepts structured JSON as input, with an 'mcp' subcommand that starts a stdio server. I bundle a 'help' command with a 'describe' action for self-service guidance scoped to a particular feature/tool.

    • abhis3798 21 hours ago

      I see remote MCP servers as a great interface to consume api responses. The idea that you essentially make your apis easily available to agents to bring in relevant context is a powerful one.

      When folks say MCP is dead, I don't get it. What other alternatives exist in place of MCP? Arbitrary code via curl/sdks to call a remote endpoint?

      • attentive 20 hours ago

        > What other alternatives exist in place of MCP? Arbitrary code via curl/sdks to call a remote endpoint?

        cli?

        for example aws cli. It's a full interface to aws API. Why would you need mcp for that?

        and if you have any doubts, agents use it with a great effect even without any relevant skill. "aws help" is fully discoverable.

        • rsalus 20 hours ago

          yes, but clis thus need self-service commands to provide guidance, and their responses need to be optimized for consumption by agents. in a sense, this is the same sort of context tax that MCP servers incur. so in my view cli and MCP are complementary tools; one is not strictly superior over the other.

          • cheema33 6 hours ago

            > yes, but clis thus need self-service commands to provide guidance, and their responses need to be optimized for consumption by agents.

            MCP vs Agent Skills:

            MCPs once configured cost you tokens even when they are not used. Unlike MCPs, skills use progressive disclosure. The AI agent does not load up the entire context, if the skill is not being used.

            MCPs will die off mostly for this reason alone.

    • mattnewton 21 hours ago

      I think cli’s are more token efficient- the help menu is loaded only when needed, and the output is trivially pipe able to grep or jq to filter out what the model actually wants

    • nojito a day ago

      all you need is a simple skills.md and maybe a couple examples and codex picks up my custom toolkit and uses it.

      • dominotw 21 hours ago

        whats your custom toolkit

        • nojito 17 hours ago

          I have dozens of clis that are custom built for codex to use.

  • sega_sai 21 hours ago

    I don't know if this just anecdotal random impression, but in a last week or two I had mostly good experience with Google cli. While previously I constantly complained about it. I have been using it together with codex, and I would not say that one is much better than another.

    It is hard to say nowadays, when things change so quickly

  • sunaookami 7 hours ago

    Gemini 3.1 Pro through Gemini CLI always tries to write files with cat instead of using the write_file tool, it's awful at tool use.

  • girvo a day ago

    I know it’s a bit of a tangent but man you’re right re. Gemini CLI. It’s woefully bad, barely works. Maybe because I was a “free” user trying it out at the time, but it was such a bad experience it turned me off subscribing to whatever their coding plan is called today.

    • ElCapitanMarkla 20 hours ago

      I had this exp too, but I trialed the pro sub a few weeks back and it has been great. I have no complaints this time

    • luckydata a day ago

      it's not the CLI, it's the model. The model wasn't trained to do that kind of work, was trained to do one shot coding, not sustained back and forth until it gets it right like Claude and ChatGPT.

  • hu3 19 hours ago

    > Also MCP is very obviously dead...

    Couldn't have been more wrong. MCP despite its manageable downsides is leagues ahead of anything else in many ways.

    The fact that SoTA models are trained to handle MCP should be hint enough to the observant.

    I probably build one MCP tool per week at work.

    And every project I work on gets its own MCP tool too. It's invaluable to have specialized per-project tooling instead of a bunch of heterogeneous scripts+glue+prayer.

    Anything specialized goes into an MCP.

  • synalx 15 hours ago

    Antigravity's coding agent is worlds apart from Gemini CLI, though.

  • danpalmer 20 hours ago

    > So bad in fact that it’s clear none of their team use it.

    I use it extensively, many of my colleagues do. I get a ton of value out of it. Some prefer Antigravity, but I prefer Gemini CLI. I get fairly long trajectories out of it, and some of my colleagues are getting day-long trajectories out of it. It has improved massively since I started using it when it first came out.

  • hugs 15 hours ago

    some serious people use vibium instead. (full-disclosure: "some serious people" is me.)

  • quotemstr 21 hours ago

    > Why permanently sacrifice that chunk of your context window when you can just use CLI tools which are also faster and more flexible and many are already trained in

    What about all the CLI tools not baked into the model's priors?

    Every time someone says "extensibility mechanism X is dead!", I think "Well, I guess that guy isn't doing anything that needs to extend the statistical average of 2010s-era Reddit"

boomskats a day ago

Been using this one for a while, mostly with codex on opencode. It's more reliable and token efficient than other devtools protocol MCPs i've tried.

Favourite unexpected use case for me was telling gemini to use it as a SVG editing repl, where it was able to produce some fantastic looking custom icons for me after 3-4 generate/refresh/screenshot iterations.

Also works very nicely with electron apps, both reverse engineering and extending.

cheema33 21 hours ago

How does this compare with playwright CLI?

https://github.com/microsoft/playwright-cli

  • Torn 21 hours ago

    I personally found playwright-cli, and agent-browser which wraps playwright, both more token-efficient than using the raw mcp.

    Odd that this article from Dec 2025 has been posted to the top of HN though

  • akvadrako 11 hours ago

    Its easier to connect to existing sessions in your main browser.

  • EGreg 21 hours ago

    It’s made by Google and comes with Chrome

LauraMedia 11 hours ago

Is this really the state of AI in 2026?

It takes over your entire browser to center a div... and then fails to do so?

re5i5tor 7 hours ago

Lots of MCP hate, and some love, in the comments.

80% of MCPs are thin wrappers over APIs . Yes they stink.

A well written remote OAuth MCP need not stink. Tons of advantages starting with strong security baked in.

I like Cloudflare Code Mode as an MCP pattern. Two tools, search and execute.

1M Opus 4.6 also reduces the penalties of MCP’s context approach. Along with tool search etc.

recroad 19 hours ago

I've been using TideWave[1] for the last few months and it has this built-in. It started off as an Elixir/LiveView thing but now they support popular JavaScript frameworks and RoR as well. For those who like this, check it out. It even takes it further and has access to the runtime of your app (not just the browser).

The agent basically is living inside your running app with access to databases, endpoints etc. It's awesome.

1. https://tidewave.ai/

  • galaxyLogic 19 hours ago

    Interesting. Does it only work with known frameworks like Next, React etc. or could I use it with my plain Node.js app which produces browser-output?

    • recroad 18 hours ago

      No, doesn't use work with server-side only apps.

      • galaxyLogic 11 hours ago

        It's a server-side app whose GUI is in the browser, a bit like Electron or what have you.

        I guess my question is does Tidewawe only work with a fixed set of known "frameworks" like React and Next, or is it a more general purpose tool for analysing an app based on its source-code and the HTML it produces for the browser?

guard402 15 hours ago

We tested this — the default take_snapshot path (Accessibility.getFullAXTree) is safe. It filters display:none elements because they're excluded from the accessibility tree.

But evaluate_script is the escape hatch. If an agent runs document.body.textContent instead of using the AX tree, hidden injections in display:none divs show up in the output. innerText is safe (respects CSS visibility), textContent is not (returns all text nodes regardless of styling).

The gap: the agent decides which extraction method to use, not the user. When the AX tree doesn't return enough text, a plausible next step is evaluate_script with textContent — which is even shown as an example in the docs.

Also worth noting: opacity:0 and font-size:0 bypass even the safe defaults. The AX tree includes those because the elements are technically 'rendered' and accessible to screen readers. display:none is just the most common hiding technique, not the only one.

nubsero 16 hours ago

I’ve been experimenting with a similar approach using Playwright, and the biggest takeaway for me was how much “hidden API” most modern websites actually have.

Once you start mapping interactions → network calls, a lot of UI complexity just disappears. It almost feels like the browser becomes a reverse-engineering tool for undocumented APIs.

That said, I do think there’s a tradeoff people don’t talk about enough:

- Sites change frequently, so these inferred APIs can be brittle - Auth/session handling gets messy fast - And of course, the ToS / ethical side is a gray area

Still, for personal automation or internal tooling, it’s insanely powerful. Way more efficient than driving full browser sessions for everything.

Curious how others are handling stability — are you just regenerating these mappings periodically, or building some abstraction layer on top?

Igor_Wiwi 7 hours ago

I can't make it run under WSL with Claude Code, anyone succeeded in this?

tonyhschu a day ago

Very cool. I do something like this but with Playwright. It used to be a real token hog though, and got expensive fast. So much so that I built a wrapper to dump results to disk first then let the agent query instead. https://uisnap.dev/

Will check this out to see if they’ve solved the token burn problem.

  • esperent 17 hours ago

    I use playwright CLI. Wrote a skill for it, and after a bit of tuning it's about 1-2k context per interaction which is fine. The key was that Claude only needs screenshots initially and then can query the dev tools for logs as needed.

  • mambodog 21 hours ago

    my workaround for this was to make a wrapper mcp server which uses claude haiku to summarize the page snapshot returned in the response of each playwright mcp call, and that has worked pretty well for me: https://github.com/jsdf/playwright-slim-mcp

danielraffel 15 hours ago

I asked Claude to use this with the new scheduled tasks /loop skill to update my Oscar picks site every five minutes during tonight’s awards show. It simply visited the Oscars' realtime feed via Chrome DevTools, and updated my picks and pushed to gh pages. It even handled the tie correctly.

https://danielraffel.me/2026/03/16/my-oscar-2026-picks/

I know I could just use claude --chrome, but I’m used to this excellent MCP server.

  • egeozcan 15 hours ago

    Very cool idea and site! I wish claude and others could parse video streams then you could even create your own feed.

  • dt3ft 12 hours ago

    Neat idea :)

jasonjmcghee 19 hours ago

I had fun playing with it + WebMCP this weekend, but I think, similarly to how claude code / codex + MCP require SKILL.md, websites might too.

We could put them in a dedicated tag:

    <script type="text/skill+markdown">
    ---
    name: ...
    description ...
    ---
    ...
    </script>
For all the skills with you want on the page, optionally set to default which "should be read in full to properly use the page".

And then add some javascript functions to wrap it / simplify required tokens.

Made a repo and a website if anyone is interested: https://webagentskills.dev/

hugs 15 hours ago

i wish more people knew or cared about web standards vs proprietary protocols. the webdriver bidi protocol took the good parts of cdp and made it a w3c standard, but no one knows about it. some of the people who do know about it, find one thing they don't like and give up. let's not keep giving megacorporations outsized influence and control over the web and the tools we use with it. let's celebrate standards and make them awesome.

babas03 17 hours ago

Great to see the standalone CLI shipping in alongside this! There’s been a lot of talk today about MCP 'context bloat,' but providing a direct bridge to active DevTools sessions is something a standard headless CLI can’t replicate easily. The ability to select an element in the Elements panel and immediately 'delegate' the fix to an agent is exactly the kind of hybrid workflow that makes DevTools so powerful.

vinmay 16 hours ago

For something like Chrome DevTools MCP with authenticated browser sessions, the specific risk is credentials in the browser context + any SEND capability reachable from the same entry points. If a page can inject a prompt that triggers a tool call, and that call path can also reach outbound network I/O, you have an exfiltration vector without needing shell access at all.

rossvc a day ago

I've been using the DevTools MCP for months now, but it's extremely token heavy. Is there an alternative that provides the same amount of detail when it comes to reading back network requests?

  • nerdsniper a day ago

    It's probably not fully optimized and could be compacted more with just some effort, and further with clever techniques, but browser state/session data will always use up a ton of tokens because it's a ton of data. There's not really a way around that. AI's have a surprising "intuition" about problems that often help them guess at solutions based on insufficient information (and they guess correctly more often than I expect they should). But when their intuition isn't enough and you need to feed them the real logs/data...it's always gonna use a bunch of tokens.

    This is one place where human intuition helps a ton today. If you can find the most relevant snippets and give the AI just the right context, it does a much better job.

  • mmaunder a day ago

    Yes. CLI. Always CLI. Never MCP. Ever. You’re welcome.

    • flash_us0101 15 hours ago

      CLI is great when you know what command to run. MCP is great when the agent decides what to run - it discovers tools without you scripting the interaction.

      The real problem isn't MCP vs CLI, it's that MCP originally loaded every tool definition into context upfront. A typical multi-server setup (GitHub, Slack, Sentry, Grafana, Splunk) consumes ~55K tokens in definitions before Claude does any work. Tool selection accuracy also degrades past 30-50 tools.

      Anthropic's Tool Search fixes this with per-tool lazy loading - tools are defined with defer_loading: true, Claude only sees a search index, and full schemas load on demand for the 3-5 tools actually needed. 85% token reduction. The original "everything upfront" design was wrong, but the protocol is catching up.

    • nerdsniper a day ago

      That doesn't solve the issue here because the amount of data in the browser state dwarfs the MCP overhead.

      • bartek_gdn 20 hours ago

        Can't we just iteratively inspect the network traces then? We don't need to consume the whole 2mb of data, maybe just dump the network trace and use jq to get the fields to keep the context minimal. I haven't added this in https://news.ycombinator.com/item?id=47207790 , but I feel it would be a good addition. Then prompt it with instructions to gradually discover the necessary data.

        But then I wonder, where the balance is between a bunch of small tool calls, vs one larger one.

        I recall some recent discussion here on hn on big data analysis

      • cheema33 21 hours ago

        > That doesn't solve the issue here because the amount of data in the browser state dwarfs the MCP overhead.

        The problem with MCP is that you are paying the price in token usage, even if you are not using the MCP server. Why would anybody want that?

        And no, the tool search function recently introduced by Anthropic does not completely solve this problem.

RALaBarge 20 hours ago

I made a websocket proxy + chrome extension to give control of the DOM to agents for my middleware app: https://github.com/RALaBarge/browserbox

The thing I am working on is improving at the moment agentic tool usage success rates for my research and I use this as a proxy to access everything with the cookies I allow in the session.

raw_anon_1111 a day ago

I don’t do any serious web development and haven’t for 25 years aside from recently vibe coding internal web admin portals for back end cloud + app dev projects. But I did recently have to implement a web crawler for a customer’s site for a RAG project using Chromium + Playwrite in a Docker container deployed to Lambda.

I ran the Docker container locally for testing. Could a web developer test using Claude + Chromium in a Docker container without using their real Chrome instance?

senand a day ago

I suggest to use https://github.com/simonw/rodney instead

  • meowface a day ago

    Unfortunately there are like a billion competitors to this right now (including Playwright MCP, Playwright CLI, the new baked-in Playwright feature in Codex /experimental, Claude Code for Chrome...) and I can never quite decide if or when I should try to switch. I'm still just using the ordinary Playwright MCP server in both Codex and Claude Code, for the time being.

speedgoose a day ago

Interesting. MCP APIs can be useful for humans too.

Chrome's dev tools already had an API [1], but perhaps the new MCP one is more user friendly, as one main requirement of MCP APIs is to be understood and used correctly by current gen AI agents.

[1]: https://chromedevtools.github.io/devtools-protocol/

yan5xu 11 hours ago

I built something in this space, bb-browser (https://github.com/epiral/bb-browser). Same CDP connection, but the approach is honestly kind of cheating.

Instead of giving agents browser primitives like snapshot, click, fill, I wrapped websites into CLI commands. It connects via CDP to a managed Chrome where you're already logged in, then runs small JS functions that call the site's own internal APIs. No headless browser, no stolen cookies, no API keys. Your browser is already the best place for fetch to happen. It has all the cookies, sessions, auth state. Traditional crawlers spend so much effort on login flows, CSRF tokens, CAPTCHAs, anti-bot detection... all of that just disappears when you fetch from inside the browser itself. Frontend engineers would probably hate me for this because it's really hard to defend against.

So instead of snapshot the DOM (easily 50K+ tokens), find element, click, snapshot again, parse... you just run

  bb-browser site twitter/feed
and get structured JSON back.

Here's the thing I keep thinking about though. Operating websites through raw CDP is a genuinely hard problem. A model needs to understand page structure, find the right elements, handle dynamic loading, deal with SPAs. That takes a SOTA model. But calling a CLI command? Any model can do that. So the SOTA model only needs to run once, to write the adapter. After that, even a small open-source model runs "bb-browser site reddit/hot" just fine.

And not everyone even needs to write adapters themselves. I created a community repo, bb-sites (https://github.com/epiral/bb-sites), where people freely contribute adapters for different websites. So in a sense, someone with just an open-source model can already feel the real impact of agents in their daily workflow. Agents shouldn't be a privilege only for people who can access SOTA models and afford the token costs.

There's a guide command baked in so if you do want to add a new site, you can tell your agent "turn this website into a CLI" and it reverse-engineers the site's APIs and writes the adapter.

v0.8.x dropped the Chrome extension entirely. Pure CDP, managed Chrome instance. "npm install -g bb-browser" and it works.

tomcasaburi 11 hours ago

imo a much better setup is using playwright-cli + some skill.md files for profiling (for example, I have a skill using aidenybai/react-scan for frontend react profiling). token efficient, fast and more customizable/upgradable based on your workflow. vercel-labs/agent-browser is also a good alternative.

anesxvito 21 hours ago

Been using MCP tooling heavily for a few months and browser debugging integration is one of those things that sounds gimmicky until you actually try it. The real question is whether it handles flaky async state reliably or just hallucinates what it thinks the DOM looks like?

glerk a day ago

Note that this is a mega token guzzler in case you’re paying for your own tokens!

oldeucryptoboi a day ago

I tell Claude to use playwright so I don't even need to do the setup myself.

  • nomilk a day ago

    Similarly, cursor has a built in browser and visit localhost to see the results in the browser. Although I don't use it much (I probably should).

pritesh1908 a day ago

I have been using Playwright for a fairly long time now. Do checkout

jedisct1 8 hours ago

For context extraction, Lightpanda is a really great option. Much faster than Chrome, and it comes with a built-in MCP server.

However, it will not fill forms, etc. But it can be combined with agent-browser to get the best of both worlds: https://swival.dev/pages/web-browsing.html

teaearlgraycold 20 hours ago

I love how in their demo video where they center an element it ends up off-center.

slrainka a day ago

chrome-cli with remote developer port has been working fine this entire time.

JKolios a day ago

Now that there's widespread direct connectivity between agents and browser sessions, are CAPTCHAs even relevant anymore?

wuxiaoxia88 11 hours ago

so good openclaw automation extensions, i like it.

wuxiaoxia88 11 hours ago

so good browser automation extensions. i like it

m00dy 17 hours ago

Connecting a remote VPS to a local Chrome session is usually a headache. It gets complicated when your Claw setup is on the server but the browser session stays on your own machine. I ended up using Proxybase’s relay [0] to bridge the gap, and it actually solved the connection issues for me.

[0] https://relay.proxybase.xyz

holoduke 20 hours ago

One tip for the illegal scrapers or automators out there. Casperjs and phanthomjs are still working very well for anti bot detection. These are very old libs no longer maintained. But I can even scrape and authenticate at my banks.

Yokohiii a day ago

Was already eye rolling about the headline. Then I realized it's from chrome.

Hoping from some good stories from open claw users that permanently run debug sessions.

cravo 6 hours ago

[dead]

jerrygoyal 16 hours ago

It's from 2025. The post should have a year tag.

  • tomhow 15 hours ago

    Done, thanks!

AlexDunit a day ago

[flagged]

  • David-Brug-Ai a day ago

    This is the exact problem that pushed me to build a security proxy for MCP tool calls. The permission model in most MCP setups is basically binary, either the agent can use the tool or it can't. There's nothing watching what it does with that access once its granted.

    The approach I landed on was a deterministic enforcement pipeline that sits between the agent and the MCP server, so every tool call gets checked for things like SSRF (DNS resolve + private IP blocking), credential leakage in outbound params, and path traversal, before the call hits the real server. No LLM in that path, just pattern matching and policy rules, so it adds single-digit ms overhead.

    The DevTools case is interesting because the attack surface is the page content itself. A crafted page could inject tool calls via prompt injection. Having the proxy there means even if the agent gets tricked, the exfiltration attempt gets caught at the egress layer.

Sonofg0tham a day ago

[flagged]

  • simianwords a day ago

    AI

    • rzmmm a day ago

      Yes. Can someone tell me why even HN has bots. For selling upvotes to advertisement purposes?

      • Sonofg0tham a day ago

        I'm not a bot and definitely not advertising - I'm new on HN and trying to contribute with a few comments where I can.

paseante 21 hours ago

[flagged]

  • raincole 20 hours ago

    The ultimate conflict of interest here is that the sites people want to crawl the most are the ones that want to be crawled by machines the least (e.g. Youtube). So people will end up emulating genuine human users one way or another.

  • maxaw 20 hours ago

    Fully agree. Will take some time though as immediate incentive not clear for consumer facing companies to do extra work to help ppl bypass website layer. But I think consumers will begin to demand it, once they experience it through their agent. Eg pizza company A exposes an api alongside website and pizza company B doesn’t, and consumer notices their agent is 10x+ faster interacting with company A and begins to question why.

  • codybontecou 21 hours ago

    Is this just a well-documented API?

  • ElectricalUnion 21 hours ago

    > interface designed for humans — the DOM.

    Citation needed.

    > The web already went through this evolution once: we went from screen-scraping HTML to structured APIs. Now we're regressing back to scraping because agents need to interact with sites that only have human interfaces.

    To me, sites that "only have human interfaces" are more likely that not be that way totally on purpose, attempting to maximize human retention/engagement and are more likely to require strict anti-bot measures like Proof-of-Work to be usable at all.

  • socalgal2 19 hours ago

    I feel like the fact tha HTML is end result is exactly why the Web is so successful. Yes, structured APIs sound great, until you realize the API owners will never give you the data you actually want via their APIs. This is why HTML has done so well. Why extensions exist. And why it's better for browser automation.

  • imiric 20 hours ago

    > What we actually need is a standard for websites to expose a machine-readable interaction layer alongside the human one.

    We had this 20 years ago with the Semantic Web movement, XHTML, and microformats. Sadly, it didn't pan out for various reasons, most of them non-technical. There's remnants of it today with RSS feeds, which is either unsupported or badly supported by most web sites.

    Once advertising became the dominant business model on the web, it wasn't in publishers' interest to provide a machine-readable format of their content. Adtech corporations took control of the web, and here we are. Nowadays even API access is tightly controlled (see Reddit, Twitter, etc.).

    So your idea will never pan out in practice. We'll have to continue to rely on hacks and scraping will continue to be a gray area. These new tools make automated scraping easier, for better or worse, but publishers will find new ways to mitigate it. And so it goes.

    Besides, if these new tools are "superintelligent", surely they're able to navigate a web site. Captchas are broken and bot detection algorithms (or "AI" themselves) are unreliable. So I'd say the leverage is on the consumer side, for now.

  • quotemstr 21 hours ago

    > expose a machine-readable interaction layer alongside the human one

    Which is called ARIA and has been a thing forever.