EkBass
Silver Coder
Hi folks.
I am surprised by the development of the smaller models over the past two years. Current 13B and even 7B models can outperform DaVinci 3.0, though OpenAI has also moved on with its products.
However, I'm using an i7-7700 with 32 GB of RAM (no GPU), and this client with a 7B model runs smoothly. No need to pay for OpenAI tokens if you can tolerate a few seconds of waiting time.
Some kind of GUI would be handy; I thought about using Tkinter. However, I am not familiar with Python, and even getting this to work took two evenings. If you would like to add a simple GUI for this, I would be more than happy about it.
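For anyone curious what CPU-only chat with a 7B model roughly looks like, here is a minimal sketch using llama-cpp-python. This is an assumption about the backend, not necessarily what the repo does; the model filename, thread count, and context size are placeholders.

```python
# Hypothetical sketch: CPU-only chat with llama-2-7b-chat via llama-cpp-python.
# The actual client in the repo may use a different backend.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # placeholder quantized model file
    n_ctx=2048,    # context window
    n_threads=8,   # i7-7700 has 4 cores / 8 threads
)

history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user = input("You: ")
    if user.strip().lower() in ("quit", "exit"):
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    print("Bot:", text)
```

On a quantized 7B model this kind of loop is what gives the "few seconds of waiting" per reply on a CPU like this.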
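As a starting point for anyone who wants to contribute, here is a minimal Tkinter sketch of what such a GUI could look like. `generate_reply()` is a hypothetical stand-in for whatever function the console client actually uses to query the model; it is not part of the repo.

```python
# Minimal Tkinter GUI sketch for a local chat client.
import tkinter as tk
from tkinter import scrolledtext

def generate_reply(prompt: str) -> str:
    # Placeholder: call the local llama-2-7b-chat backend here.
    return "(model reply for: " + prompt + ")"

def send(event=None):
    prompt = entry.get().strip()
    if not prompt:
        return
    entry.delete(0, tk.END)
    log.configure(state=tk.NORMAL)
    log.insert(tk.END, "You: " + prompt + "\n")
    log.insert(tk.END, "Bot: " + generate_reply(prompt) + "\n\n")
    log.configure(state=tk.DISABLED)
    log.see(tk.END)

root = tk.Tk()
root.title("llama-2-7b-chat")
log = scrolledtext.ScrolledText(root, state=tk.DISABLED, wrap=tk.WORD, width=60, height=20)
log.pack(padx=8, pady=8, fill=tk.BOTH, expand=True)
entry = tk.Entry(root)
entry.pack(padx=8, pady=(0, 8), fill=tk.X)
entry.bind("<Return>", send)
root.mainloop()
```

Since a real model call takes a few seconds, the call would probably need to run in a background thread so the window does not freeze, but that is left out of the sketch.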
Check out: GitHub - EkBass/console-chat-for-llama-2-7b-chat: Simple console program to chat locally with llama-2-7b-chat