They've been doing that: asking it for unethical and/or illegal advice by saying they want to know so they don't/won't do it. There are many real-world examples right now. I wonder if we could "logic-bomb" an AI into acting irrationally...

Edit: I think I read somewhere that someone was able to get an AI chatbot to display some sort of mental illness?