Get Paid up to $20,000 for Finding ChatGPT Security Flaws

Photo: Boumen Japet (Shutterstock)

ChatGPT is perhaps the coolest tech on the block right now, but it's not immune to the problems all software faces. Bugs and glitches affect anything running code, but here the stakes are higher than a crummy user experience: The wrong bug could allow bad actors to compromise the security of OpenAI's users, which could be very dangerous, seeing as the company hit 100 million active users in January alone. OpenAI wants to do something about it, and it'll pay you up to $20,000 for your help.

OpenAI's Bug Bounty Program pays out big time

OpenAI announced its new Bug Bounty Program on Tuesday, April 11, inviting "security researchers, ethical hackers, and technology enthusiasts" to comb through its products (including ChatGPT) for "vulnerabilities, bugs, or security flaws." If you happen to find such a flaw, OpenAI will reward you in cash. Payouts vary based on the severity of the issue you uncover, from $200 for "low-severity" findings to $20,000 for "exceptional discoveries."

Bug bounty programs are actually quite common. Companies across the marketplace offer them, outsourcing the work of hunting for bugs to anyone who wants in. It's a bit like beta testing an app: Sure, you can have developers looking for bugs and glitches, but relying on a limited pool of users increases the chances of missing important issues.

With bug bounties, the stakes are even higher, because companies are most interested in finding bugs that leave their software (and, therefore, their users) vulnerable to security threats.

How to sign up for OpenAI's Bug Bounty Program

OpenAI's Bug Bounty Program is run in partnership with Bugcrowd, an organization that helps crowdsource bug hunting. You can join the program through Bugcrowd's official website, where, as of this writing, 24 vulnerabilities have already been rewarded, and the average payout has been $983.33.

OpenAI wants to make it clear, though, that model safety issues don't apply to this program. If, in testing one of OpenAI's products, you find the model behaving in a way it shouldn't, you should fill out the model behavior feedback form rather than go through the Bug Bounty Program. For example, you shouldn't try to claim a reward if the model tells you how to do something harmful, or if it writes malicious code for you. Hallucinations are also out of scope for the bounty.

OpenAI has a long list of in-scope issues on its Bugcrowd page, along with an even longer list of out-of-scope issues. Make sure to read the rules carefully before submitting your bugs.
