A downloadable app for Windows, macOS, and Linux

Welcome to HammerAI Desktop, the AI character chat you've been looking for! HammerAI Desktop is a desktop app that uses llama.cpp and Ollama to run AI chat models locally on your computer, without logging in.

Some key features:

  • No configuration needed - download the app, download a model (from within the app), and you're ready to chat 
  • Works offline
  • Free (unlimited chats with any character)
  • Supports macOS (Apple Silicon M1 / M2 and Intel x86), Windows, and Ubuntu
  • No sign in needed
  • NSFW content allowed - we have uncensored models which can be used for roleplay
  • Private - your chat is only stored for as long as the chat window is open in the app
  • Automatic detection and use of your GPU
  • Support for V1 and V2 character card imports
  • Support for many different LLMs - today you can chat with OpenHermes-2.5-Mistral-7B, Luna-AI-Llama2-Uncensored, Toppy-M-7B, Nous-Hermes-Llama-2-7B, Llama-2-7B-Chat, and Llama-2-13B-Chat.
  • Story mode (where you write a story with AI) and character chat mode (where you chat with a specific character).

You can check out https://www.hammerai.com/desktop for more information - have fun chatting!
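
For the technically curious: HammerAI's own internals aren't published, but since it builds on Ollama, chatting with a locally downloaded model looks roughly like the sketch below - a request to Ollama's local HTTP API. This is illustrative only (the model name is just an example), and everything stays on your machine.

```typescript
// Illustrative sketch only (not HammerAI's actual code): talking to a local
// Ollama server over its HTTP API. Assumes Ollama is running on its default
// port and the named model has already been downloaded.
async function chatLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "openhermes",                           // example model name
      messages: [{ role: "user", content: prompt }],
      stream: false,                                 // one JSON response instead of a stream
    }),
  });
  const data = await res.json();
  return data.message.content;                       // the assistant's reply
}

chatLocally("Hi there!").then(console.log);
```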

Download

  • Windows (external download)
  • macOS - Apple Silicon (M1 / M2) (external download)
  • macOS - Intel (x86) (external download)
  • Ubuntu (external download)

Install instructions

You can download directly from this page or from our website: https://www.hammerai.com/desktop. Just install, choose a model to download, and start chatting!


Comments



I didn't get a response, but my RAM and disk both went to 100%, and after a while I lost 20 GB of space on my C drive. I can't figure out where it went, since the HammerAI folder is only 3 GB. How is that possible when I downloaded the model to my D drive?

Sorry! We moved over to Discord to chat, tl;dr for others is to check C:\Users\<yourname>\AppData\Roaming\HammerAI and C:\Users\<yourname>\AppData\Local\HammerAI
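
(For anyone else chasing the same missing disk space, here is a small Node.js sketch - not part of HammerAI - that totals up the size of those two folders; the paths are built from the standard Windows environment variables.)

```typescript
// Hypothetical helper, not shipped with HammerAI: sum up the size of the
// HammerAI data folders mentioned above to see where the space went.
import * as fs from "fs";
import * as path from "path";

function folderSizeBytes(dir: string): number {
  let total = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    total += entry.isDirectory() ? folderSizeBytes(full) : fs.statSync(full).size;
  }
  return total;
}

const candidates = [
  path.join(process.env.APPDATA ?? "", "HammerAI"),       // ...\AppData\Roaming\HammerAI
  path.join(process.env.LOCALAPPDATA ?? "", "HammerAI"),  // ...\AppData\Local\HammerAI
];

for (const dir of candidates) {
  if (fs.existsSync(dir)) {
    console.log(dir, (folderSizeBytes(dir) / 1e9).toFixed(2), "GB");
  }
}
```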

I can't install the latest version, any help?

Hi, sorry! I need more information to help you though 😂 i.e. what exactly won't install? Would also be great if you want to join the Discord so we can chat more quickly!

I fixed it, don't worry. I had to install an uninstaller because the previous version was giving those errors.

Hello, when will it be updated again? I am waiting for an update that adds image generation and makes it faster, since I have a problem: when I send the first message in a chat, the screen freezes and the app closes. I do not have a good processor, but it should be sufficient to use an AI.

Hi! We've started on image generation, though no guarantee on timing. Regarding freezing, can you use Ollama by itself (https://ollama.com/download)? If not, then the issue is just that you need a more powerful computer :( If so, would be awesome if you want to join the Discord where I can help debug and get it working for you!

Wow, I like this service, you respond quickly. Let me see how to solve the problem and I'll get into the Discord.

It's weird that I cannot download the default model or any other, even though I installed the software on my computer (;へ:), and I can't even download the model in the web version (ಥ_ಥ)

I cannot add an image here (I don't know why); if you need screenshots, please send me an email

( • ̀ω•́ )✧  my email: nihilismnil@163.com

Hi, sorry, that is unexpected! Could you join the Discord and we can chat there?

I'm sorry, I can't (;д;) I can't open Discord in my country (I think it's because of the firewall in my country.)

But I have an account on GitHub, can we use that? ヾ(•ω•`。)

Shoot! Well if you want to try the new beta, maybe it will work for you? It's a totally new way of using the LLM which should have WAY fewer errors! You can download it directly here: https://github.com/hammer-ai/hammerai/releases/tag/v0.0.97

Sorry for not replying to you for so long.ヘ(;´Д`ヘ)

I spent a lot of time downloading HammerAI-0.0.97 Setup. I can open it, but it just shows "Installation has failed: There was an error while installing the application." When I try to open and check the setup log, it just disappears (;д;).

I think I'm destined not to use your software (╥╯^╰╥), but I still appreciate what you have done for me and I really, really hope you will succeed in the future! *\(๑• ₃ •๑)

Hope you have a wonderful day. (Sorry for my bad English, I'm from CN and I'm using a translator.) (;へ:)

Hi, so sorry for this! But I have good news for you - I just rolled out a totally new LLM setup which is way more stable and should work on (almost) any computer. If you want to give it one more try I think it might work hopefully :) You can download it directly here: https://www.hammerai.com/desktop


I probably wouldn't have bothered downloading it if I'd known you had to subscribe for 9 bucks a month to be able to keep conversations. They just get deleted every time you leave the conversation.


Sorry about that :( If it helps you feel better, I use all the money I make to pay other devs to help work on the project with me. It was just getting too much for me to manage all alone with everything going on in my life. A top requested feature which I plan to add is that everyone can always have one conversation saved for free.


Question: will there be image generation in the future, and if so, will it be paid or free? (What I'd do is have two versions: a lower-quality one that is free and a higher-quality one that is paid.) Also, will there be a feature to save the chat on your device?

Hi! No immediate plans for image generation, but definitely watching the space. If I see a nice library to help with this that is cross-platform I will add it. Not sure on pricing yet, but will definitely let you know! 

For saving chats, we have that on Desktop currently! And are working on bringing it to Web as well. And when we roll out Mobile we'll have it.

Sometimes when I'm chatting, the answers start to generate more slowly and then just stop generating. Is there any way to fix that? My GPU is not the best, so maybe that's it?

Mmm, that could definitely be part of it, but the other issue is that our logic for long chats leads to many tokens that need to be re-evaluated. I have been looking into fixing this, but there's no workaround yet, sorry about that. One thing that helps is to use characters with less text in them (i.e. their personality is shorter). Will let you know when we have a better fix for this, sorry again!
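
(To make that workaround concrete: the character definition plus the chat history has to fit in the model's context window, so shorter characters leave more room before old tokens need to be re-processed. A rough sketch of that kind of trimming - purely illustrative, not HammerAI's actual logic:)

```typescript
// Illustrative only: keep the character card plus the most recent messages
// under a crude character budget so the prompt fits the context window.
interface ChatMessage { role: "user" | "assistant"; content: string; }

function trimHistory(
  characterCard: string,
  history: ChatMessage[],
  maxChars = 8000,                 // crude stand-in for a real token budget
): ChatMessage[] {
  let used = characterCard.length;
  const kept: ChatMessage[] = [];
  // Walk backwards so the newest messages are kept first.
  for (let i = history.length - 1; i >= 0; i--) {
    used += history[i].content.length;
    if (used > maxChars) break;
    kept.unshift(history[i]);
  }
  return kept;
}
```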

What should I do if the model loads endlessly?

Hi sorry! That is a known bug. The immediate fix is to turn off GPU acceleration under Settings. If that doesn't work, could you join the Discord so we can chat more and try to fix it for you? 

Hi, I released a new update which uses Ollama under the hood and is much more stable! Would love to hear if it works for you now. Sorry again.


nice, thank god there's no membership/credits ass thing. need to blow off steam if u catch my drift ;)

Will there be an android version?


Hi, yes! We are currently working on a mobile app. No timeline yet, but will let you know when it's ready to test.

I get this message, and the AI doesn't do anything.
"Server started without GPU acceleration using no AVX support."

Hi, that means we weren't able to detect your GPU correctly and so started using just CPU. We'll be rolling out updates in the future which should help here, sorry about that!

Hello, I bought the pro plan for a month and cancelled it right after I got the license (because with subscriptions I know that I'll forget to cancel at the end of the month).
Now the money is gone from my bank account and it says that my pro license is cancelled. Can you help me out?

Best regards,
Poro


Oh, so sorry, that looks like a bug where we stop accepting licenses if the subscription is cancelled. Please DM me and I'll refund you for your order while fixing this. If you want to buy it again so it works for the month I'll also refund that (i.e. it will be free for you until I fix this).

All good, I just want the license to work for the month :)
So far I like the program a lot.
Where do I DM you?

Discord is best! You can see the link to join at the top here: https://www.hammerai.com/


Hello, this is a good tool. I wonder if you are going to translate it into Spanish, or if that is very difficult. I am not very good at English, so I can't make the most of it. (This message was written with a translator.)

Hi, that's a great idea! I will look into translating the app, I had not thought too much about it previously.


Hi, I really like this tool and I would like to have the chat-saving feature because it's a must-have! However, I can't really afford that; I think the price is a little expensive (108€ a year with taxes) and I would have preferred a one-time payment option (even if a little more expensive). So I really hesitate...

1. If one day I decide to buy the pro, how can I be sure that the servers where my characters and their profile images are stored will not shut down one day (for one reason or another)? (Because if I understand correctly, even locally stored characters share their profile picture and description with the servers, no?)

2. This error can happen sometimes (from my logs):

[2024-04-14 19:15:36.242] [error] [llama-cpp-server@generateResponse] Error TypeError: terminated

Then the character no longer responds and I have to reload it. If I get the chat-saving feature, I wonder if the conversation will be able to resume after this kind of error.

That's a lot of questions, I know. Thanks, and sorry for my imperfect English!


Hi, glad you like it! Yeah I'm thinking a lot about a lifetime license because others have asked for it. What would you be able to pay for it? I'm just not quite sure how to price it.

For your questions:

1. When you save characters locally, the description and images are all saved on your computer, so it will work forever! You can click here to see exactly what we save and where it is saved.

2. Ah, that is a bug with longer conversations. I don't yet have a fix, but I have also seen it and will try to fix. Sorry about that.

Anyways, thank you for the feedback and support. I will let you know when I fix this bug, and no worries if you do not want to pay for the pro version, I understand it is a lot of money.

Deleted 235 days ago

Thanks for your clear answer! :)

I don't know about the price, but between 60 and 110 euros/dollars for a lifetime license could be a good compromise between your need to fund developers and the limited means of certain users, and also the limited size of chats even when saved (see bugs below). FYI: I just purchased a short-term licence today to test the Pro features and I like it! But I am a little uncomfortable knowing that there is an automatic bank withdrawal at the end of each cycle. So yes, a lifetime license could be great (as long as there is no kind of Pro+ version in the future) :p

TWO BUGS:

1. The dialogue generation can sometimes stop (especially on long sentences); it's not a big deal because you just have to try again, but it's annoying.

2. Sometimes during a totally normal conversation (usually long, but not always), the character starts rambling, mixing verbs and words from older messages. E.g.: "Oh no, maybe it's then yes I said it I said, said  it!, but now we are here to, i can it! I think it's alright too!" or smaller unfinished strange ones like "(giggling*". After some checking, it looks like it's a problem with exceeding the token limit. It would be great to be able to set it to 16384 or even 32768, because conversations are short (even with a context size of 8192). :/

Thank you! Okay that is very useful.

Ah, I see, I have been asked about a "continue generating feature", so that is on the roadmap. And yes, sorry about the bug with the rambling, that's known and I need to fix it still.

Also, just to offer: I have a 100% refund policy if you're unhappy with the Pro features, as the goal is not to make money, it's just to fund continued development. So if you're not happy, just email / DM me your details on Discord and I will refund you. I'd rather not make the money than make people unhappy.


No no, it's ok, thanks for your concern. However, I would be interested in joining the Discord (I gave my profile name when ordering but I don't have an invitation). :)


Ah sorry! I'm not sure how to directly invite people to the Discord. Instead I just assign a role to people who have purchased. If you join using the link here I'll then add you to the role! https://www.hammerai.com/


I get having to make money to fund development, but I feel that saying that "We will never charge for access to features" is a bit of a lie now that pro is a thing.



That is true, I'm sorry about that  😭. I should have qualified that statement when I made it, though I did really believe it at the time. The issue is that there are a lot of stability improvements and features to make, and I have a lot going on in my life. So my goal with collecting money is to be able to pay a developer to come work on the project. I'm sorry though, you're right that I went back on my word.

Deleted 207 days ago

Hi, thank you for the kind words! What model do you want? I can add some other bigger ones.

Deleted 207 days ago

Thank you 😁

Will there be an option to save the chats, so you don't have to start over every time you use a model?

Yes that is the next feature I will add, definitely top of mind!


Perfect ^^.

I really enjoy this program, even if I have slight issues with some parts, but that's to be expected; I have no clue which model I should use and which ones my PC can handle xD.

Maybe you have an idea how to fix the bot starting to loop the same message and getting kind of "stuck"?


Thank you! Yeah I've been talking to people in Discord about the looping. Right now my suggestion is to just use a bigger model or try prompting differently. Sorry about that!


Hi itt66, saving chats is now supported!

Yeah, I saw it. But sadly I won't be able to use it ^^°

I really don't like subscriptions, I would rather do a one-time payment.

But it is how it is, I still like the program.

Hi, I've downloaded the app and given it permission to access the network, but it is stuck on "loading model"... Am I missing something? Do I need to configure or download something else?

Hi! Could you come and ask in Discord? Then we can debug more easily. Thanks!

It runs quite well on Windows. Nice work :) Did you develop the software alone? Any GitHub repo available?


Thank you! Yes I develop it alone, though I am looking for a co-founder, preferably one with React Native experience who wants to bring this to mobile (and add some paid features, so we could make money there). No GitHub today, I have thought about open source but haven't decided to do it yet.

Does anyone know how to regenerate/refresh a single message? I hate having to reload the entire chat, just because one message was bad.

Sorry, no way to do that currently. But that's a nice feature to add and is possible, I will look into that when I next have time!


Thanks for adding a local option. This app is great.


Glad you like it!

Does it generate images, or is it only text?


Only text! But I have heard people ask for images, maybe some day.

Why does the AI in this feel robotic, not really flowing and learning like other limited-memory AI? I can literally say hi to someone and it literally says a set dialogue every time.


Hey! It can really depend on the model you're using - which are you using, and have you tried a few? And have you tried modifying the temperature (try increasing it)? That may help make it feel more creative.

If those don't help it may be the character. If the character prompt length (i.e. personality + scenario + example dialogs + first message) is too long there may not be enough context length left. So trying on some different characters may help things.
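
(For reference, temperature is just a sampling knob: higher values make the output more random and "creative", lower values more deterministic. If you were calling a local Ollama server directly rather than going through the app's Settings, it would be passed like the sketch below - illustrative only, with an example model name.)

```typescript
// Illustrative only: the temperature knob as it appears in a request to a
// local Ollama server; the app exposes the same idea through its Settings.
const request = {
  model: "llama2",                                 // example model name
  messages: [{ role: "user", content: "Hi!" }],
  stream: false,
  options: { temperature: 1.1 },                   // default is ~0.8; higher = more varied output
};

// Send it with fetch against the local server:
fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(request),
})
  .then((res) => res.json())
  .then((data) => console.log(data.message.content));
```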

PS. If you want to join the Discord we can chat back and forth more easily, would love to help you get this working better.


This looks amazing! Thank you guys! I am curious, though, are you guys intending to integrate image generation into the chats? Such as scenes in a story while roleplaying?


Yeah that would definitely be awesome. I have been keeping my eye on it in case there is an approach I should take. Want to join the Discord and we can chat more about this?


i am now back home. hammer time 😈


OH YEAH  🎉


HOLY SHIT IT'S ON WINDOWS... oh wait shit i am on vacation rn and my high end gaming PC is at home. damn.


Hi all, excited to share that Windows and Ubuntu are out in beta! 

You can download them from Itch directly or on https://www.hammerai.com/desktop. If you have any issues or suggestions, please feel free to join the Discord and let me know there: https://discord.gg/kXuK7m7aa9

Enjoy!


so excited to finally be able to try it out


Good luck with the Windows release!
Also, can't wait to try it

Thanks, it's out now!

I've recently been interested in setting up LLaMA models on Windows, using KoboldAI and SillyTavern as the medium to hook into them. The main thing I found was that fully offloading the model onto your VRAM is an insane boost in performance and the sort of 'end goal' you want when loading models. Having a lot of RAM is a bit of a red herring since RAM speeds are meh in comparison. Even if it's just one layer, you'll feel the difference.

For 7B models, you'd need at minimum 6 GB of VRAM. And even then, you'd probably need a model about 3 GB in size, because not all 7B models are born equal.

So is this something your software will do automatically? In terms of figuring out context sizes, layers, BLAS batch sizes, etc., to determine the optimal loadout for the best speeds? Are different quantized versions of models available depending on VRAM availability?


Hey! That's a good question and seems like something we should support, though right now we just have some hard-coded presets. Happy to chat more about it in our Discord if you're interested: https://discord.gg/kXuK7m7aa9
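
(For context, the presets mentioned above boil down to knobs like how many layers get offloaded to the GPU and how large the context window is. With Ollama as the backend those map onto request options such as num_gpu and num_ctx - the values in the sketch below are made-up examples, not HammerAI's actual presets.)

```typescript
// Hypothetical presets, for illustration only: the kind of per-request
// options a "low VRAM" vs "high VRAM" profile might set when using Ollama.
const presets = {
  lowVram:  { num_ctx: 2048, num_gpu: 8 },   // small context, few layers offloaded to GPU
  highVram: { num_ctx: 8192, num_gpu: 99 },  // big context, effectively "offload everything"
};

function pickPreset(vramGb: number) {
  return vramGb >= 8 ? presets.highVram : presets.lowVram;
}

// These would then be passed as the "options" field of a request to
// Ollama's /api/generate or /api/chat endpoint.
console.log(pickPreset(6));
```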

Is 16 GB of RAM enough?

Mm short answer is that I don't know. But is this for Web or Desktop? If Desktop, what computer are you using? 

Windows, Web, Opera GX

Got it. So I would say try it and see? But Windows Desktop is now also out so maybe that will work?


Got this shit tagged for when the Windows or Android release comes out. I'd love to use an AI program for my storyboarding without having to pay 9.99 to get more than like 3 messages. Will actually donate too if it does.


Thanks for your support, I actually recently made some progress on getting the build working. No date yet, but  I also really want a Windows build and am very sorry for the delay :(

No problem bud. This stuff is a lot of work, and I appreciate the effort you're putting in. Good luck!

Thanks! Finally got it out if you want to try :)

Sounds good my dude.

I know I'm the 10,000th person to ask, but when is it coming to windows?


I know I feel quite bad, everyone is waiting :(  It is still my #1 priority, but I've had some issues with getting the libraries to compile (specifically linking them into HammerAI), and things in life have gotten crazy. I will have it out as soon as I can though, sorry everyone.

In case anyone is reading this and is interested, I'd be happy to bring on any contributors! We will keep desktop and web free forever, but I was thinking we could build a mobile version with some paid features and then split profits between contributors.

Okay well it took two months beyond that, but it's out now!

when are we going to get a windows release?


It is my top priority right now! But have run into some issues with MLC-LLM that are taking more time to get working than I originally thought. The app itself runs fine on Windows though, just not yet the AI chat part. If you join the Discord I plan to post in there as soon as it's ready for early testers.

I guess the real answer is.. today!


Following this with interest for a Windows release.

I love the fact that you included a screenshot of someone instantly going for the "can I kiss you" with their waifu; you know your target demographic, that's for sure.


You found the easter egg 😂


Windows is now out!


Oh, and um, why doesn't it work with Windows (for now)? Please explain to me, I would like to know.

I just haven't yet finished up the work to support it! There is no technical reason why it can't work on Windows, and I definitely want Windows support as soon as possible.

It's working now!


Damn, one of the few moments where Apple users have something cool that Windows users don't.

True haha. But Windows is out now, so it's even again xD


This seems awesome, so I wanted to try the web version. Whenever I tried a character, it wanted to download the AI model but displayed the message "Cannot find adapter that matches the request". Is there a way for me to manually download the model, or maybe I forgot to enable something in my browser? (I use Google Chrome.)

Hmm, that is unexpected. 

It looks like these issues: https://github.com/mlc-ai/web-llm/issues/105#issuecomment-1594835134 & https://github.com/mlc-ai/web-llm/issues/128#issuecomment-1595151465, which are "likely because your env do not support the right GPU requested(due to older mac) or browser version" or "likely mean that you do not have a device that have enough GPU RAM".  

Could you try going to https://webgpureport.org/ and seeing what it says?

Also if you'd like to join the Discord it would be great to continue this conversation there: https://discord.gg/kXuK7m7aa9
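
(If anyone else hits "Cannot find adapter that matches the request": that message means the browser's WebGPU requestAdapter call came back empty. Here's a quick check you can paste into the devtools console - WebGPU typings may need the @webgpu/types package, hence the cast.)

```typescript
// Quick diagnostic: if this logs null, the browser isn't exposing a WebGPU
// adapter for your GPU, which matches the error above.
async function checkWebGpu(): Promise<void> {
  const gpu = (navigator as any).gpu;
  if (!gpu) {
    console.log("WebGPU is not available in this browser");
    return;
  }
  const adapter = await gpu.requestAdapter();
  console.log("adapter:", adapter);      // null means no usable adapter was found
  if (adapter) {
    console.log("features:", [...adapter.features]);
  }
}

checkWebGpu();
```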


I think it really was a problem with my Google Chrome, because I tried Microsoft Edge (good god...) and it seems to be working normally. I'll have to check what's wrong with it later, but in case you'd like some info: my GPU is an Nvidia GeForce RTX 3060 Ti and my Chrome version was 117.0.5938.63 (it says it's the most recent), which, from what I read online, was supposed to support WebGPU.


Yada yada, same as all the other comments. Looks awesome.


Today is the day...


let me know when this is on windows

Will do!


let me know when this is on windows (1) 

It finally happened :)

We're out now!


I always loved playing AI Dungeon back in the day; I'm looking forward to this coming to Windows!!!


We will let you know!

Today is the day!
