zlacker

[return to "Gemini 2.5 Pro Preview"]
1. herpdy+Y4[view] [source] 2025-05-06 15:36:34
>>meetpa+(OP)
I agree it's very good, but the UI is still usually an unusable, scroll-jacking disaster. I've found it's best to let a chat sit for a few minutes after it has finished printing the AI's output. Finding the `ms-code-block` element in dev tools and logging `$0.textContent` is reliable too.
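For anyone who hasn't tried the dev-tools route, here's a minimal console sketch, assuming the app still renders output inside `ms-code-block` custom elements (that tag name may change between releases):

    // Paste into the browser console on the chat page.
    // Assumes model output is wrapped in <ms-code-block> custom elements.
    const blocks = document.querySelectorAll('ms-code-block');
    const last = blocks[blocks.length - 1]; // most recently rendered block
    console.log(last ? last.textContent : 'no ms-code-block found');
    // Or: select the element in the Elements panel and log $0.textContent.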
◧◩
2. uh_uh+i6[view] [source] 2025-05-06 15:44:15
>>herpdy+Y4
Noticed this too. There's something funny about billion-dollar models being handicapped by stuck buttons.
◧◩◪
3. energy+y9[view] [source] 2025-05-06 15:59:09
>>uh_uh+i6
The Gemini app has a number of severe bugs that impact everyone who uses it, and those bugs have persisted for over 6 months.

There's something seriously dysfunctional and incompetent about the team that built that web app. What a way to waste the best LLM in the world.

◧◩◪◨
4. thebyt+zj1[view] [source] 2025-05-07 00:33:19
>>energy+y9
Like what? I use it daily and haven't come across anything seriously dysfunctional or incompetent.
◧◩◪◨⬒
5. energy+rv1[view] [source] 2025-05-07 02:56:46
>>thebyt+zj1
Major:

1- "Something went wrong" error

2- "Show thinking" never stops

3- "You've been signed out" error

4- UI spammed with garbled text if you attach a large file

5- Prompt rejected with no error; the prompt text returns to the chat input but attachments are deleted

6- Pasting small amounts of text takes a few seconds in long chats

Annoying:

1- Scroll is hijacked when the prompt is accepted by the server and thinking starts, instead of when you send the prompt or not at all.

---

If you haven't experienced these, then I can only hazard a guess that you're keeping your chats under 100k tokens of context or you're using AI Studio. The major issues show up when you push it with 90k-token prompts or 200k-token cumulative chats. They don't all have the same precise trigger, though: some are tied to long chats, others to big attachments, etc.
