zlacker

1. bsenft+(OP) 2024-05-15 11:58:11
I'm at a law firm, and this is in use with attorneys to great success. And no, none of them are so dumb that they don't verify the LLM's outputs.
2. mrtran+in 2024-05-15 14:09:23
>>bsenft+(OP)
How can people not see what we have today? You only need six instances of an LLM running at the same time, plus a system to coordinate between them, and then you still have to verify the results manually anyway. Sign me up!
3. datame+Xv 2024-05-15 14:50:40
>>mrtran+in
If a certain percentage of the work gets done through research synthesis and aligning multiple perspectives, why isn't that approach worth lauding?

I've created a version of one of the resume GPTs that analyses how well my resume fits a position when fed the job description along with a lookup of the company. It then points out, in a streamlined way, what should be highlighted more or omitted in my resume, and helps me craft a cover letter based on a template I put together. Should I stop using it just because I can't feed it 50 job roles and have it automatically pick which ones to apply to, make all the necessary changes to the documents, and then apply?
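
For anyone curious what a pipeline like that might look like, here's a minimal sketch assuming the OpenAI Python SDK. The model name, file names, and prompt wording are illustrative placeholders, not the actual GPT configuration described above.

    # Minimal sketch of a resume-fit + cover-letter pipeline.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set
    # in the environment. Model, prompts, and file paths are placeholders only.
    from openai import OpenAI

    client = OpenAI()

    def ask(system_prompt: str, user_prompt: str) -> str:
        # Single chat completion; no retries or streaming, for brevity.
        resp = client.chat.completions.create(
            model="gpt-4o",  # any chat-capable model would do
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
        )
        return resp.choices[0].message.content

    resume = open("resume.txt").read()
    job_description = open("job_description.txt").read()
    company_notes = open("company_notes.txt").read()  # the manual company lookup

    # Step 1: gap analysis -- what to highlight more and what to omit.
    analysis = ask(
        "You review resumes against job descriptions.",
        f"Resume:\n{resume}\n\nJob description:\n{job_description}\n\n"
        f"Company notes:\n{company_notes}\n\n"
        "List what should be highlighted more and what could be omitted.",
    )
    print(analysis)

    # Step 2: draft a cover letter from a fixed template, using the analysis.
    template = open("cover_letter_template.txt").read()
    letter = ask(
        "You draft cover letters from a fixed template.",
        f"Template:\n{template}\n\nAnalysis:\n{analysis}\n\n"
        "Fill in the template for this role; keep it to one page.",
    )
    print(letter)

The human stays in the loop at both steps, which is the whole point: the model does the synthesis, you do the verification before anything gets sent.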
