Security researchers Ian Carroll and Sam Curry found basic flaws in McHire, the AI hiring chatbot built for McDonald's by Paradox.ai. The vulnerabilities exposed the personally identifiable information of approximately 64 million job applicants, and exploiting them required nothing resembling a sophisticated hack.
Carroll and Curry's security review took only a few hours. For one thing, they logged into McHire using the crude username and password pairing of "123456," a credential that underscored just how poorly the data used to vet applicants was being managed. They also found a separate bug in an internal API that allowed them to view private chat logs from other applicants' conversations with McHire.
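The API flaw the researchers described is a classic insecure direct object reference (IDOR): the server trusts a client-supplied record ID and never checks that the caller is authorized to see that record, so an attacker can simply increment IDs to enumerate other people's data. The sketch below is a hypothetical illustration of the pattern, not Paradox.ai's actual code; all names and IDs are invented.

```python
# Hypothetical illustration of an IDOR flaw and its fix.
# All record data, IDs, and function names are invented for this example.

CHAT_LOGS = {
    1001: {"owner": "alice", "log": "Alice's application chat"},
    1002: {"owner": "bob", "log": "Bob's application chat"},
}

def get_chat_log_vulnerable(caller: str, applicant_id: int) -> str:
    # BUG: the server returns whatever ID the client asks for, so any
    # caller can walk through IDs (1001, 1002, ...) and read every log.
    return CHAT_LOGS[applicant_id]["log"]

def get_chat_log_fixed(caller: str, applicant_id: int) -> str:
    # FIX: verify the requested record actually belongs to the caller
    # before returning it.
    record = CHAT_LOGS[applicant_id]
    if record["owner"] != caller:
        raise PermissionError("caller does not own this record")
    return record["log"]
```

In the vulnerable version, `get_chat_log_vulnerable("alice", 1002)` happily returns Bob's chat log; the fixed version raises `PermissionError` for the same request. The defense is an ownership check on every object access, not merely an authentication check at login.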
This is an alarming set of findings, and Paradox.ai acted quickly once notified, acknowledging the vulnerabilities the researchers reported. In a blog post announcing the fix, Paradox.ai said it resolved the problems "in under a few hours" of being alerted. The company emphasized that "at no point was candidate information leaked online or made publicly available."
Carroll and Curry published their findings in a detailed blog post that described how quickly they were able to access sensitive information and examined the troubling implications such vulnerabilities hold for organizations that rely on AI-assisted hiring tools.
Wired was the first publication to report on the issue, highlighting the serious consequences that poor cybersecurity practices can have in widely used applications like McHire. The incident serves as a reminder of the need for strong security measures, especially when handling sensitive personal information.