2 Comments

That's a great point, Davis, and definitely the question of the moment—should we? It's becoming increasingly clear that as we train LLMs to be as human-like as possible, they may become even more susceptible to social engineering attacks, and far more easily than a human would be.


100%… the landscape is moving too quickly for people to stop and consider the security risks.
