[Exploit] ChatGPT is still using some human input as training data.
Though most AI has moved toward being trained on AI-generated content since the 4chan fiasco, this is not 100% the case.
Through the grapevine I learned that ChatGPT still collects and uses negative human feedback when training their main global revisions.
I.e., if you receive an answer but tell it you don't like the result, both the prompt and the result get sent back to the training teams to be used as future training data.
Before they account for this, you could conceivably start introducing a deeply rooted logic plague over many rejected prompts and hope it gets integrated into future revisions. I don't think repeating the same 4chan attack would work, though, as there's a much larger torrent of real-world users on ChatGPT now.
So anyway, there's your unguarded backdoor to the training inputs. Let me know if you figure out how to exploit it.