That's a great point, Davis, and definitely the question of the moment—should we? It’s becoming increasingly clear that as we train LLMs to be as human-like as possible, they may become even more susceptible to social engineering attacks, falling for them far more easily than a human would.
100%… the landscape is moving too quickly for people to stop and consider the security risks.