>>vita77+F8
Not only can I guarantee the models are bad with numbers; unless it's a highly tuned and modified version, they're also too slow for this arena.
Stick to attention transformers in smaller, purpose-built model designs, which have much lower latencies than pre-trained LLMs... something like the sketch below.
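
Rough sketch of what I mean (assuming PyTorch; the class name, sizes, and the numeric-task framing are all made up for illustration): a tiny task-specific attention encoder with a couple million parameters at most, instead of a multi-billion-parameter pre-trained LLM.

```python
# A minimal sketch, assuming PyTorch. All hyperparameters are
# hypothetical; the point is a compact attention model sized for
# one narrow numeric task, not general language modeling.
import torch
import torch.nn as nn

class SmallNumericEncoder(nn.Module):
    """Compact transformer encoder for a narrow numeric task."""
    def __init__(self, vocab_size=32, d_model=64, n_heads=4,
                 n_layers=2, max_len=32, n_outputs=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embeddings for short sequences.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_outputs)  # regression head

    def forward(self, tokens):
        x = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # mean-pool, then predict

# Quick smoke test on dummy input: batch of 8 length-16 sequences.
model = SmallNumericEncoder().eval()
tokens = torch.randint(0, 32, (8, 16))
with torch.no_grad():
    out = model(tokens)
print(out.shape)  # torch.Size([8, 1])
```

A model this size does a forward pass in a fraction of the time any pre-trained LLM needs, and you train it directly on the numbers instead of hoping a language model tokenizes them sanely.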