zlacker

[parent] [thread] 3 comments
1. hqzhao+(OP)[view] [source] 2024-08-16 22:43:07
Based on popular pre-trained models like GPT-4, Claude Sonnet, and Gemini 1.5, we've built several agents designed to mimic the behaviors and habits of the experts on our team.

Our idea is straightforward: after a decade of auditing code and writing exploits, we've accumulated a wealth of experience. So, why not teach these agents to replicate what we do during bug hunting and exploit writing? Of course, the LLMs aren't sufficient on their own, so we've integrated various program analysis techniques to augment the models and help the agents understand more complex and esoteric code.
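
To give a very rough idea of the augmentation part (a heavily simplified sketch, not our actual pipeline; run_taint_analysis and query_llm are placeholders for the real analysis passes and model APIs):

    def run_taint_analysis(source_code: str) -> str:
        # Placeholder: a real pass would report tainted sources/sinks,
        # call-graph facts, reachable allocations, etc.
        return "user input from recv() reaches memcpy() length argument"

    def query_llm(prompt: str) -> str:
        # Placeholder for a call to a hosted model (GPT-4, Claude, Gemini, ...).
        raise NotImplementedError("wire up your model API here")

    def audit_function(source_code: str) -> str:
        # Feed the analysis facts to the model alongside the code itself.
        facts = run_taint_analysis(source_code)
        prompt = (
            "You are auditing the following code for vulnerabilities.\n"
            f"Static-analysis facts:\n{facts}\n\n"
            f"Code:\n{source_code}\n\n"
            "Report any bugs and how an attacker could reach them."
        )
        return query_llm(prompt)

The point is just that the model never sees the code "cold": it always gets the analysis results as extra context.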

replies(2): >>simonw+39 >>dogma1+tA
2. simonw+39[view] [source] 2024-08-17 00:56:11
>>hqzhao+(OP)
When you call these things “agents” what do you mean by that? Is this a system prompt combined with some defined tools, or is it a different definition?
replies(1): >>tinco+dq
3. tinco+dq[view] [source] [discussion] 2024-08-17 07:47:43
>>simonw+39
An agent in this context is software that uses LLM prompt results to determine its next action, often looping to iteratively get to a good result.
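Schematically, something like this (a rough sketch, not any particular framework; call_model and TOOLS stand in for whatever model API and tool set the system actually defines):

    def call_model(context: str) -> str:
        # Placeholder for an LLM API call; returns either "FINAL: <answer>"
        # or "<tool> <argument>".
        raise NotImplementedError("plug in a real model here")

    TOOLS = {
        # Hypothetical tool: search the codebase for a pattern.
        "grep": lambda pattern: f"(matches for {pattern!r} would go here)",
    }

    def run_agent(task: str, max_steps: int = 10) -> str:
        # The model's own output decides the next action; tool results are
        # appended to the context and the loop continues until it says FINAL.
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            reply = call_model("\n".join(history))
            if reply.startswith("FINAL:"):
                return reply[len("FINAL:"):].strip()
            tool, _, arg = reply.partition(" ")
            result = TOOLS[tool](arg)
            history.append(f"{reply}\nResult: {result}")
        return "gave up after max_steps"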
4. dogma1+tA[view] [source] 2024-08-17 10:19:33
>>hqzhao+(OP)
Are you going to publish your RAG strategy?