zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. OhNoNo+sj1[view] [source] 2023-09-12 22:04:31
>>kcorbi+(OP)
Just curious: would it be possible to add a small network plus a body of study material, say programming books, freeze the weights of the existing large network, and train the combination to predict the books? The existing network knows language but not the content; the combined network gets trained on the content, and together they should score better. These "small" added networks could each be specific to a certain topic (e.g. learning Python), so they become modular, essentially a kind of LoRA network for LLMs.
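(This is roughly what LoRA-style adapters do today. A minimal sketch with Hugging Face PEFT, assuming a Llama 2 base; the checkpoint name, adapter settings, and training step are illustrative, not a full recipe:)

    # Freeze a pretrained Llama 2 and train only a small LoRA adapter
    # on domain text (e.g. programming books). Names are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    # Base weights stay frozen; only the low-rank adapter matrices train.
    config = LoraConfig(r=8, lora_alpha=16,
                        target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of the model

    # Next-token prediction on the domain corpus updates just the adapter.
    batch = tokenizer("def quicksort(xs):", return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()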

Maybe start this way from the ground up, so you get modular units for health, finance, programming, education, writing assistance, philosophy, ethics, etc. If the modules can be swapped, you might be able to reduce their size: pick two or three, chain them, and you have an LLM for a specific area of interest (reducing running cost).
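(A minimal sketch of the swap-in-modules idea, assuming each topic adapter was trained as above; the adapter paths and names are hypothetical:)

    # One frozen base model, several topic adapters loaded side by side
    # and switched per request without reloading the 7B base.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    model = PeftModel.from_pretrained(base, "adapters/python", adapter_name="python")
    model.load_adapter("adapters/finance", adapter_name="finance")
    model.load_adapter("adapters/health", adapter_name="health")

    model.set_adapter("finance")  # route a finance question to the finance module
    # ... run generation ...
    model.set_adapter("python")   # swap modules cheaply at inference time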

2. sandko+Xp1[view] [source] 2023-09-12 22:40:51
>>OhNoNo+sj1
This is part of what we're doing at Automorphic. Building shareable, stackable adapters that you can compose like lego bricks.