ChocMontePy 2 minutes ago

I noticed last year that some archived pages are getting altered.

Every Reddit archived page used to have a Reddit username in the top right, but then it disappeared. "Fair enough," I thought. "They want to hide their Reddit username now."

The problem is, they did it retroactively too, removing the username from past captures.

You can see this on old Reddit captures: the normal archived page has no username, but if you switch to the Screenshot tab of the same archive, the username is still there. The screenshot is the original capture; the username has since been removed from the normal webpage version.

When I noticed it, it seemed like such a minor change, but with these latest revelations, it doesn't seem so minor anymore.

celsoazevedo 2 hours ago

I don't see the point in doxing anyone, especially those providing a useful service for the average internet user. Just because you can put some info together, it doesn't mean you should.

With this said, I also disagree with turning everyone who uses archive[.]today into a botnet that DDoSes sites. Changing the content of archived pages also raises questions about the authenticity of what we're reading.

The site behaves as if it was infected by some malware and the archived pages can't be trusted. I can see why Wikipedia made this decision.

  • jsheard 2 hours ago

    It's also kind of ironic that a site whose whole premise is to preserve pages forever, whether the people involved like it or not, is seeking to take down another site because they are involved and don't like it. Live by the sword, etc.

  • ddtaylor an hour ago

    Did they actually run the DDoS via a script, or was this a case of inserting a link that many users clicked? The two are substantially different, IMO.

    • dunder_cat an hour ago

      https://news.ycombinator.com/item?id=46624740 has the earliest writeup that I know of. It was running it via a script and intentionally using cache-busting techniques to try to increase load on the hosted WordPress infrastructure.
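
      For the unfamiliar: "cache busting" just means making every request look unique so the cache always misses and the origin server has to do the work. A minimal sketch of the general idea in Python (illustrative only, not the actual script):

          import random
          import string

          def cache_busted(url):
              # A random query string per request makes each URL look new,
              # so a CDN or page cache can never serve it from cache and
              # the origin (here, WordPress) has to render it every time.
              token = "".join(random.choices(string.ascii_lowercase + string.digits, k=12))
              return f"{url}?cb={token}"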

      • RobotToaster 16 minutes ago

        Given the site is hosted on wordpress.com, who don't charge for bandwidth, it seems to have been completely ineffective.

      • jsheard an hour ago

        > It was running

        It still is. uBlock's default lists are killing the script now, but if it's allowed to load then it still tries to hammer the other blog.

        • dunder_cat an hour ago

          Ah, good to know. My Pi-hole was actually blocking the blog itself, since the uBlock site list made its way into one of the blocklists I use. But I've been avoiding the links as much as possible because I didn't want to contribute.

      • ddtaylor an hour ago

        Thank you this is exactly the information I was looking for.

        "You found the smoking gun!"

    • hexagonwin an hour ago

      they silently ran the DDoS script on their captcha page (which is frequently shown to visitors, even when simply viewing and not archiving a new page)

  • jMyles 2 hours ago

    > Changing the content of archived pages also raises questions about the authenticity of what we're reading.

    This is absolutely the buried lede of this whole saga, and needs to be the focus of conversation in the coming age.

basch an hour ago

It seems a lot of people haven't heard of it, but I think it's worth plugging https://perma.cc/ which is really the appropriate tool for something like Wikipedia to be using to archive pages.

More: https://en.wikipedia.org/wiki/Perma.cc

  • ronsor an hour ago

    It costs money beyond 10 links, which means either a paid subscription or institutional affiliation. This is problematic for an encyclopedia anyone can edit, like Wikipedia.

  • jsheard an hour ago

    Does Wikipedia really need to outsource this? They already do basically everything else in-house, even running their own CDN on bare metal; I'm sure they could spin up an archiver which could be implicitly trusted. Bypassing paywalls would be playing with fire, though.

    • RupertSalt an hour ago

      Hypothetically, any document, article, work, or object could be uniquely identified by an appropriate URI or URN, but in practice, http URLs are how editors cite external resources.

      The URLs proved to be less permanent than expected, and so the issue of "linkrot" was addressed, mostly at the Internet Archive, and then through wherever else could bypass paywalls and stash the content.

      All content hosted by the WMF project wikis is licensed Creative Commons or compatible licenses, with narrow exceptions for limited, well-documented Fair Use content.

    • toomuchtodo an hour ago

      Archive.org is the archiver; rotted links are replaced with Archive.org links by a bot.

      https://meta.wikimedia.org/wiki/InternetArchiveBot

      https://github.com/internetarchive/internetarchivebot

      • jsheard an hour ago

        Yeah, for historical links it makes sense to fall back on IA's existing archives, but going forward Wikipedia could take their own snapshots of cited pages and substitute them in if/when the original rots. That would be more reliable than hoping IA grabbed it.

        • toomuchtodo an hour ago

          Not opposed. Wikimedia tech folks are very accessible in my experience; ask them to make a GET or POST to https://web.archive.org/save whenever a link is added via the Wiki editing mechanism. Easy peasy. Example CLI tools are https://github.com/palewire/savepagenow and https://github.com/akamhy/waybackpy

          A shortcut is to consume the Wikimedia changelog firehose and make these HTTP requests yourself, performing a CDX lookup first to see if a recent snapshot was already taken before issuing a capture request (to be polite to the capture worker queue).
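
          A minimal sketch of that check-then-save flow (the CDX and Save Page Now endpoints are the public ones; the function name is mine, and anonymous saves are rate limited, so the authenticated SPN2 API would be more robust in practice):

              import requests

              CDX = "https://web.archive.org/cdx/search/cdx"
              SAVE = "https://web.archive.org/save/"

              def archive_if_needed(url, since="20240101"):
                  # CDX lookup: is there already a snapshot newer than `since`?
                  params = {"url": url, "output": "json", "from": since, "limit": "1"}
                  resp = requests.get(CDX, params=params, timeout=30)
                  rows = resp.json() if resp.text.strip() else []
                  if len(rows) > 1:  # row 0 is the CDX header row
                      print("recent snapshot exists, skipping:", url)
                      return
                  # No recent capture: ask Save Page Now to grab one.
                  requests.get(SAVE + url, timeout=120).raise_for_status()
                  print("capture requested:", url)

          Wired up to the recent-changes feed, each newly added external link would pass through archive_if_needed once.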

          • Gander5739 24 minutes ago

            This already happens. Every link added to Wikipedia is automatically archived on the wayback machine.

          • ferngodfather 41 minutes ago

            Why wouldn't Wikipedia just capture and host this themselves? Surely it makes more sense to DIY than to rely on a third party.

            • huslage 8 minutes ago

              Why would they need to own the archive at all? The archive.org infrastructure is built to do this work already. It's outside of WMF's remit to internally archive all of the data it has links to.

          • jsheard an hour ago

            I didn't know you can just ask IA to grab a page before their crawler gets to it. In that case yeah it would make sense for Wikipedia to ping them automatically.

          • RupertSalt an hour ago

            Spammers and pirates just got super excited at that plan!

            • toomuchtodo an hour ago

              There are various systems in place to defend against them. I recommend against trying; poor form against a public good is not welcome.

xurukefi an hour ago

Kinda off-topic, but has anyone figured out how archive.today manages to bypass paywalls so reliably? I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous. I figured that they have found an (automated) way to imitate Googlebot really well.

  • jsheard 7 minutes ago

    > I figured that they have found an (automated) way to imitate Googlebot really well.

    It's not possible to imitate Googlebot well enough to fool a site (or WAF) which knows what it's doing, because the canonical way to verify Googlebot is a DNS lookup dance which will only ever succeed if the request comes from one of Googlebot's dedicated IP addresses. Same with Bingbot and all the others.
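
    The check itself is simple enough to sketch (this is the verification flow Google documents; Python stdlib only):

        import socket

        def is_real_googlebot(ip):
            # Step 1: reverse DNS. A genuine Googlebot IP has a PTR record
            # under googlebot.com or google.com.
            try:
                host, _, _ = socket.gethostbyaddr(ip)
            except OSError:
                return False
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            # Step 2: forward DNS. The claimed hostname must resolve back
            # to the same IP; anyone can fake a PTR record on their own
            # address space, but not Google's forward zone.
            try:
                _, _, addrs = socket.gethostbyname_ex(host)
            except OSError:
                return False
            return ip in addrs

    Spoofed crawlers fail step 1 immediately, which is why faking the User-Agent string alone gets you nowhere against a WAF that bothers to check.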

  • Aurornis 28 minutes ago

    > I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous.

    The curious part is that they allow scraping arbitrary pages on demand. So a publisher could put in a lot of requests to archive their own pages and see whether they all come from a single account or a small subset of accounts.

    I hope they haven't been stealing cookies from actual users through a botnet or something.

    • xurukefi 21 minutes ago

      Exactly. If I were an admin of a popular news website, I would archive some articles and look at the access logs in the backend. This can't be too hard to figure out.
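
      Something like this, as a sketch (the log path and article path are hypothetical; assumes a standard nginx combined log format):

          import re

          LOG = "/var/log/nginx/access.log"         # hypothetical path
          TARGET = "/2026/01/some-paywalled-piece"  # the article you archived

          with open(LOG) as f:
              for line in f:
                  if TARGET not in line:
                      continue
                  ip = line.split()[0]
                  # in the combined format, the last quoted field is the User-Agent
                  fields = re.findall(r'"([^"]*)"', line)
                  print(ip, fields[-1] if fields else "?")

      Whatever IP and session fetched the page seconds after you hit "archive" is your answer.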

  • elzbardico 38 minutes ago

    > which is, of course, ridiculous.

    Why? In the world of web scraping this is pretty common.

    • xurukefi 23 minutes ago

      Because it works too reliably. Imagine what that would entail: managing thousands of accounts, and needing to strip the account details from archived pages perfectly. Every time the website changes its code even slightly you are at risk of losing one of your accounts. It would constantly break and would be an absolute nightmare to maintain. I've personally never encountered such a failure on a paywalled news article; archive.today has managed to give me a non-paywalled clean version every single time.

      Maybe they use accounts for some special sites. But there is definitely some automated generic magic happening that manages to bypass the paywalls of news outlets. Probably something Googlebot-related, because those websites usually give Google their news pages without a paywall, probably for SEO reasons.

  • tonymet 39 minutes ago

    I’m an outsider with experience building crawlers. You can get pretty far with residential proxies and browser fingerprint optimization. Most of the b-tier publishers use RBC and heuristics that can be “worked around” with moderate effort.

    • quietsegfault 20 minutes ago

      .. but what about subscription only, paywalled sources?

  • layer8 15 minutes ago

    It’s not reliable, in the sense that there are many paywalled sites that it’s unable to archive.

    • xurukefi 10 minutes ago

      But it is reliable in the sense that if it works for a site, then it usually never fails.

bjourne 18 minutes ago

FYI, archive.today is NOT the Internet Archive/Wayback Machine.

casey2 4 minutes ago

Anecdotally I generally see archive.is/archive.today links floating around "stochastic terrorist" sites and other hate cults.

rdiddly 25 minutes ago

So toward the end of last year, the FBI was after archive.today, presumably either for keeping track of things the current administration doesn't want tracked, or maybe for the paywall thing (on behalf of rich donors/IP owners). https://gizmodo.com/the-fbi-is-trying-to-unmask-the-registra...

That effort appears to have gone nowhere, so now suddenly archive.today commits reputational suicide? I don't suppose someone could look deeper into this please?

chrisjj 3 hours ago

> an analysis of existing links has shown that most of its uses can be replaced.

Oh? Do tell!

  • that_lurker an hour ago

    I would be surprised if archive.today had something that was not in the Wayback Machine.

    • chrisjj an hour ago

      Archive.today has just about everything the archived site doesn't want archived. Archive.org doesn't, because it lets sites delete archives.

    • bombcar an hour ago

      Wayback machine removes archives upon request, so there’s definitely stuff they don’t make publicly available (they may still have it).

    • zahlman 40 minutes ago

      Trying to search the Wayback Machine almost always gives me their made-up 498 error, and when I do get a result, the interface for scrolling through dates is janky at best.

    • ribosometronome an hour ago

      Accounts to bypass paywalls? The audacity to do it?

      • that_lurker an hour ago

        Oh yeah, those were a thing. As a public organization they can't really do that.

        I personally just don't use websites that paywall important information.

  • nobody9999 3 hours ago

    >> an analysis of existing links has shown that most of its uses can be replaced.

    >Oh? Do tell!

    They do. In the very next paragraph in fact:

       The guidance says editors can remove Archive.today links when the original 
       source is still online and has identical content; replace the archive link so 
       it points to a different archive site, like the Internet Archive, 
       Ghostarchive, or Megalodon; or “change the original source to something that 
   doesn’t need an archive (e.g., a source that was printed on paper).”

    • chrisjj 3 hours ago

      Well, that's an odd idea of "can be replaced".

      > editors can remove Archive.today links when the original source is still online and has identical content

      Hopeless. Just begs for alteration.

      > a different archive site, like the Internet Archive,

      Hopeless. It allows archive tampering by the page's own JS and archive deletion by the domain owner.

      > Ghostarchive, or Megalodon

      Hopeless. Coverage is insignificant.

      • Kim_Bruning 2 hours ago

        > archive.today

        Hopeless. Caught tampering with the archive.

        The whole situation is not great.

      • nobody9999 2 hours ago

        You quoted a sentence and asked for clarification; I just quoted the very next paragraph after it.

        You're welcome.

        As for the rest, take it up with Jimmy Wales, not me.

mrguyorama 3 hours ago

>In emails sent to Patokallio after the DDoS began, “Nora” from Archive.today threatened to create a public association between Patokallio’s name and AI porn and to create a gay dating app with Patokallio’s name.

Oh good. That's definitely a reasonable thing to do or think.

The raw sociopathy of some people. Getting doxxed isn't good, but this response is unhinged.

  • oytis 14 minutes ago

    I mean, the admin of archive.today might face jail time if deanonymized; it's kind of understandable that he's nervous. Meanwhile, for Patokallio it's just curiosity and clicks.

  • jMyles an hour ago

    It's a reminder of how fragile and tenuous the connections are between our browser/client outlays, our societal perceptions of online norms, and our laws.

    We live at a moment where it's trivially easy to frame possession of an unsavory (or even illegal) number on another person's storage media, without that person even realizing (and possibly, with some WebRTC craftiness and social engineering, even get them to pass on the taboo payload to others).

  • ouhamouch 2 hours ago

    Those were private negotiations, btw, not public statements.

    They came in response to J.P.'s blog, which had already framed AT as a project grown out of a carding forum and pushed his speculations onto Ars Technica, whose parent company just destroyed 12ft and is on to a new victim. The story is full of untold conflicts of interest, covered over with a soap opera around the DDoS.

    • MBCook an hour ago

      Why does it matter that it was a private communication?

      It's still a threat, isn't it?

    • Yossarrian22 2 hours ago

      Can you elaborate on your point?

      • ouhamouch 2 hours ago

        The fight is not about where it is shown or about what, not about "links in Wikipedia", but about whether News Inc will be able to kill AT, as they did with 12ft.

        • Yossarrian22 an hour ago

          What is News Inc? Are they a funder of Wikipedia? (I think Wikipedia doesn't have a parent company, so they're not owners.)

          • ouhamouch an hour ago

            They are the owner of Ars Technica, which wrote its 3rd (or 4th?) article in a row on AT, painting it in certain colors.

            The article about the FBI subpoena that pulled J.P.'s speculations out of the closet was also in Ars Technica, by the same author, and that same article explicitly mentioned how they are happy with 12ft being down.

            • Yossarrian22 27 minutes ago

              … Ars is owned by Conde Nast?

paganel an hour ago

At this point Archive.today provides a better service (all things considered) compared to Wikipedia, at least when it comes to current affairs.

anilakar 30 minutes ago

> If you want to pretend this never happened – delete your old article and post the new one you have promised. And I will not write “an OSINT investigation” on your Nazi grandfather

From hero to a Kremlin troll in five seconds.

alsetmusic 3 hours ago

I will no longer donate to Wikipedia as long as this is policy.

  • jraph 3 hours ago

    Why? The decision seems reasonable at first sight.

    • chrisjj 2 hours ago

      Second sight is advisable in such cases. Fact is, archives are essential to WP integrity and there's no credible alternative to this one.

      I see WP is not proposing to run its own.

      • huslage 4 minutes ago

        What exactly is credible about archive.today if they are willing to change the archive to meet some desire of the leadership? That's not credible in the least.

      • mook 2 hours ago

        Wouldn't it be precisely because archives are important that using something known to modify the contents would be avoided?

        • esseph 2 hours ago

          > something known to modify the contents would be avoided?

          Like Wikipedia?

        • chrisjj 2 hours ago

          Obviously not, since archive.org is encouraged.

      • that_lurker an hour ago

        The operator(s) of archive.today (and the other domains) are doing shady things and the links are not working, so why keep the site around when, for example, the Internet Archive's Wayback Machine works as an alternative to it?

        • chrisjj an hour ago

          What archive.today links are not working?

          > the Internet Archive's Wayback Machine works as an alternative to it

          It is appallingly insecure. It lets archives be altered by the page's own JS and deleted by the page's domain owner.

      • throw0101a an hour ago

        > Fact is, archives are essential to WP integrity and there's no credible alternative to this one.

        Yes, they are essential, and that was the main reason for not blacklisting Archive.today. But Archive.today has shown they do not actually provide such a service:

        > “If this is true it essentially forces our hand, archive.today would have to go,” another editor replied. “The argument for allowing it has been verifiability, but that of course rests upon the fact the archives are accurate, and the counter to people saying the website cannot be trusted for that has been that there is no record of archived websites themselves being tampered with. If that is no longer the case then the stated reason for the website being reliable for accurate snapshots of sources would no longer be valid.”

        How can you trust that the page that Archive.today serves you is an actual archive at this point?

        • chrisjj an hour ago

          > If ... If ...

          Oh dear.

          > How can you trust that the page that Archive.today serves you is an actual archive at this point?

          Because no-one has shown evidence that it isn't.

          • rufo 13 minutes ago

            The quote uses ifs because it was written before this was verified, but the Wikipedia thread in question has links to evidence of tampering occurring.

      • Jordan-117 an hour ago

        Did you not read the article? They not only directed a DDoS against a blogger who crossed them, but altered their own archived snapshots to amplify a smear against them. That completely destroys their trustworthiness and credibility as a source of truth.

  • Larrikin an hour ago

    About how much had you previously donated over the years?

tl2do an hour ago

Why not show both? Wikipedia could display archive links alongside original sources, clearly labeled so readers know which is which. This preserves access when originals disappear while keeping the primary source as the main reference.

  • bawolff an hour ago

    The objection is to this specific archive service, not archiving in general.

  • ranger207 an hour ago

    They generally do. Random example, citation 349 on the George Washington page: "A Brief History of GW"[link]. GW Libraries. Archived[link] from the original on September 14, 2019. Retrieved August 19, 2019.

    • Gander5739 22 minutes ago

      This will always be done unless the original url is marked as dead or similar.

shevy-java an hour ago

Does anyone have a short summary of who ran the DDoS via Archive.today and why? Isn't that something done by malicious actors? Or did others misuse Archive.today?

  • zeroonetwothree an hour ago

    If you read the linked article, it is discussed.