xlii a day ago

Misleading title. AI chatbot hallucinated discount codes that weren't accepted but scammy customer decided to push on it.

  • chmod775 a day ago

    There's no difference between you advertising something on your website vs. the chatbot that is on your website advertising something. It's something "the company" said either way.

    Many jurisdictions do, however, have protections against having to honor contracts based on errors that should have been obvious to the other party ("too good to be true"), as well as other protections against various kinds of fraud, which may also apply here, since this was clearly not done in good faith.

    If you have an AI chatbot on your website, I highly recommend communicating clearly to the user that nothing it says constitutes an offer, contract, etc., whatever it may say afterwards. As a company you could be bound by a legally binding contract merely because someone could reasonably believe they entered into one with you. Claiming that it was a mistake or that your employee/chatbot messed up may not help. Do not bury the disclaimer in fine print either.

    Or just remove the chatbot. Generally they mainly piss people off rather than being useful.

    • nebezb 19 hours ago

      https://www.cbc.ca/news/canada/british-columbia/air-canada-c...

      A disclaimer is, in my opinion, not enough.

      • hshdhdhj4444 17 hours ago

        As it shouldn’t be.

        Will the company go out of their way to do right by customers who were led to disadvantageous positions due to the chat bot?

        Almost certainly not. So the disclaimer basically ends up becoming a one-way get-out-of-jail-free card, which is not what disclaimers are supposed to be.

    • powera a day ago

      There's a difference between the chatbot "advertising" something and an hour-long manipulative conversation getting the chatbot to make up a fake discount code. Based on the OP's comments, if it had been a human employee who gave out the fake code, they could plausibly claim duress.

      • acdha 18 hours ago

        Think about if this happened in the real world. Like if I ran a book store, I’d expect some scammer to try to schmooze a discount but I’d also expect the staff to say no, refuse service, and call the police if they refused to leave. If the manager eventually said “okay, we’ll give you a discount” ultimately they would likely personally be on the hook for breaking company policy and taking a loss, but I wouldn’t be able to say that my employee didn’t represent my company when that’s their job.

        Replacing the employee with a rental robot doesn’t change that: the business is expected to handle training and recover losses due to not following that training under their rental contract. If the robot can’t be trained and the manufacturer won’t indemnify the user for losses, then it’s simply not fit for purpose.

        This is the fundamental problem blocking adoption of LLMs in many areas: they can’t reason, and prompt injection is an unsolved problem. Until there are some theoretical breakthroughs, they’re unsafe to put into adversarial contexts where their output isn’t closely reviewed by a human who can be held accountable. Companies might be able to avoid paying damages in court if a chatbot is very clearly labeled as not to be trusted, but that rules out most of the market, because companies want chatbots precisely so they can lay off customer service reps. There’s very little demand for purely entertainment chatbots, especially since even there you have reputational risks if someone can get it to make a racist joke or something similarly offensive.

      • szszrk 21 hours ago

        If having "an hour-long manipulative conversation" was possible, we have proof that the company put an unsupervised, error-prone mechanism in place of real support.

        If that "difference" is so obvious to you (and you expect it will break at some point), why not expect the company to notice the problem as well? And simply... not put a bogus mechanism in place at all.

        Edit: to be clear, I think the company should just cancel and apologize. And then take down that bot, or put better safeguards in place (good luck with that).

      • hshdhdhj4444 17 hours ago

        Umm… The human could have dropped off the conversation? Or escalated it to a manager?

  • estimator7292 16 hours ago

    If I go to your website and see a big banner with a promo code, you are obligated to honor it.

    If you walk into any retail store in the US, the price on the shelf is legally binding. If you forgot to update the shelf tag, too bad, you are now obligated to sell at the old price.

    If you advertise a price or discount, you are required to honor such. Advertising fictitious prices or discounts is an illegal scam.

    Likewise, if you have some text generator on your site that gives out prices and promo codes, that's your problem. A customer insisting you honor that is not a scammer, they are exercising their legal right to demand you honor your own obligations to sell products at the price you advertised.

    So, this is a scammy business trying to get out of their legal obligations to a customer who is completely in the right.

    Lesson: don't put random text machines in your marketing pipeline in a way that they can write checks your ass can't cash.
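The lesson above can be sketched in a few lines. A minimal, hypothetical safeguard (all names invented for illustration): the chatbot is kept out of the pricing path entirely, and any code a customer presents, whether hallucinated or real, is validated against the server-side table that is the only source of truth.

```python
# Hypothetical authoritative promo table; in practice this would be a
# database lookup, not a hard-coded dict.
VALID_PROMO_CODES = {"SPRING10": 0.10, "WELCOME5": 0.05}

def apply_discount(cart_total: float, code: str) -> float:
    """Apply a discount only if the code exists server-side.

    Chatbot output is never consulted here, so a code the bot made up
    simply fails validation at checkout.
    """
    rate = VALID_PROMO_CODES.get(code.strip().upper())
    if rate is None:
        raise ValueError(f"Unknown promo code: {code!r}")
    return round(cart_total * (1 - rate), 2)
```

This doesn't resolve the legal question of whether the bot's statements bind the company, but it does mean the text generator can't write the check in the first place.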

Archit3ch 13 hours ago

Sounds like the bot didn't give the customer the discount, as it wasn't authorised to approve discounts?

Yeah, this should be properly communicated.