A downloadable app for Windows, macOS, and Linux

Download Now (name your own price)

Welcome to HammerAI Desktop, the AI character chat you've been looking for! HammerAI Desktop is a desktop app that uses llama.cpp to run AI chat models locally on your computer.

Some key features:

  • No configuration needed - download the app, download a model (from within the app), and you're ready to chat 
  • Works offline
  • Free
  • Supports macOS (Apple Silicon M1 / M2), Windows, and Ubuntu
  • No sign in needed
  • NSFW content allowed - we have uncensored models which can be used for roleplay
  • Private - your chat is only stored as long as you have the chat window within the app open
  • Automatic detection and use of your GPU
  • Support for V1 and V2 character card imports
  • Support for many different LLMs - today you can chat with OpenHermes-2.5-Mistral-7B, Luna-AI-Llama2-Uncensored, Toppy-M-7B, Nous-Hermes-Llama-2-7B, Llama-2-7B-Chat, and Llama-2-13B-Chat.
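For the curious, character card imports work because a V1 card is a flat JSON object while a V2 card wraps the same fields in a `chara_card_v2` envelope. A minimal sketch of a normalizer, assuming the community character-card field names (HammerAI's actual importer may differ):

```python
import json

def load_character_card(raw: str) -> dict:
    """Normalize a V1 or V2 character card JSON string to one flat dict.

    V1 cards are flat; V2 cards wrap the same fields in
    {"spec": "chara_card_v2", "data": {...}}. Field names follow the
    community card spec; the app's real importer may differ.
    """
    card = json.loads(raw)
    if card.get("spec") == "chara_card_v2":
        card = card["data"]          # unwrap the V2 envelope
    fields = ("name", "description", "personality",
              "scenario", "first_mes", "mes_example")
    # Missing fields default to empty strings so both versions load the same
    return {f: card.get(f, "") for f in fields}
```

Either version then yields the same dict shape, so the chat UI only ever deals with one format.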

You can check out https://www.hammerai.com/desktop for more information - have fun chatting!

Status: In development
Platforms: Windows, macOS, Linux
Rating: 4.0 out of 5 stars (12 total ratings)
Author: HammerAI
Genre: Interactive Fiction, Role Playing, Visual Novel
Tags: Adult, Anime, artificial-intelligence, Erotic, LGBT, Romance, Singleplayer
Average session: About a half-hour
Languages: English
Inputs: Keyboard, Mouse
Accessibility: High-contrast
Links: Homepage

Download


Click download now to get access to the following files:

Windows (external)
macOS - Apple Silicon (M1 / M2) (external)
macOS - Intel (x86) (external)
Ubuntu (external)

Development log


Comments


Hello, I bought the pro plan for a month and cancelled it right after I got the license (because with subscriptions I know that I'll forget to cancel at the end of the month).
Now the money is gone from my bank account and it says that my pro license is cancelled. Can you help me out?

Best regards,
Poro

Hello, this is a good tool. I wonder if you are going to translate it to Spanish, or if that would be very difficult? I am not very good at English, so I can't make the most use of it. This message was written with a translator.

Hi, that's a great idea! I will look into translating the app, I had not thought too much about it previously.

(6 edits)

Hi, I really like this tool and I would like to have the "Chats saving" feature because it's a must have! However I can't really afford that and I think the price is a little expensive (108€ a year with taxes) and I would have preferred a one time payment choice (even if a little more expensive). So I really hesitate...

1. If one day I decide to buy the pro, how can I be sure that the servers where my characters and their profile images are stored will not shut down one day (for one reason or another)? (Because, if I understand correctly, even locally stored characters send their profile picture and description to the servers, no?)

2. This  error can happen sometimes (from my logs):

[2024-04-14 19:15:36.242] [error] [llama-cpp-server@generateResponse] Error TypeError: terminated

Then, the character no longer responds, I have to reload it. If I get the "Chats saving" feature, I wonder if the conversation will be able to resume after this kind of error.

That's a lot of questions, I know. Thanks, and sorry for my imperfect English!

(+1)

Hi, glad you like it! Yeah I'm thinking a lot about a lifetime license because others have asked for it. What would you be able to pay for it? I'm just not quite sure how to price it.

For your questions:

1. When you save characters locally, the description and images are all saved on your computer, so it will work forever! You can click here to see exactly what we save and where it is saved.

2. Ah, that is a bug with longer conversations. I don't yet have a fix, but I have also seen it and will try to fix. Sorry about that.

Anyways, thank you for the feedback and support. I will let you know when I fix this bug, and no worries if you do not want to pay for the pro version, I understand it is a lot of money.

Deleted 9 days ago
(20 edits)

Thanks for your clear answer! :)

I don't know about the price, but between 60 and 110 euro/dollar for a lifetime license could be a good compromise between your need for programmer funds and the limited means of certain users, and also the limited size of chats even when saved (see bugs below). FYI: I just purchased a short-term license today to test the Pro features, and I like it! But I am a little bit uncomfortable knowing that there is an automatic bank withdrawal at the end of each cycle. So yes, a lifetime license could be great (as long as there is no kind of Pro+ version in the future) :p

TWO  BUGS:

1. The dialogue generation can sometimes stop (especially on long sentences). It's not a really big deal because you can just try again, but it's annoying.

2. Sometimes during a totally normal conversation (long, but not always), the character starts rambling, mixing verbs and words from older messages. E.g.: "Oh no, maybe it's then yes I said it I said, said it!, but now we are here to, i can it! I think it's alright too!" or smaller unfinished strange ones like "(giggling*". After some verification, it looks like it's a problem with exceeding the token limit. It would be great to be able to set it to 16384 or even 32768, because conversations are short (even at 8192). :/
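For context, this kind of rambling happens when the conversation overflows the model's context window. A common mitigation is to trim the oldest messages before each generation; a rough sketch (the whitespace token estimate is an assumption, real apps use the model's tokenizer):

```python
def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages so the rough token total fits inside the
    context window. The token count here is a crude whitespace estimate;
    a real implementation would use the model's own tokenizer.
    """
    def est(msg: str) -> int:
        return max(1, len(msg.split()))

    kept: list[str] = []
    total = 0
    for msg in reversed(messages):   # walk from newest to oldest
        total += est(msg)
        if total > max_tokens:
            break                    # oldest messages get dropped
        kept.append(msg)
    return list(reversed(kept))      # restore chronological order
```

This keeps the prompt within budget at the cost of the model "forgetting" the earliest turns, which is why a larger configurable context limit still helps.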

Thank you! Okay that is very useful.

Ah, I see, I have been asked about a "continue generating feature", so that is on the roadmap. And yes, sorry about the bug with the rambling, that's known and I need to fix it still.

Also, just to offer, I have a 100% refund policy if you're unhappy with the Pro features, as the goal is not to make money, it's just to fund continued development. So if you're not happy, just email / DM me your details on Discord and I will refund you. I'd rather not make the money than make people unhappy.

(1 edit)

No no, it's ok, thanks for your concern. However, I would be interested to join the Discord (I gave my profile name when ordering but I didn't get an invitation). :)

(3 edits)

I get having to make money to fund development, but saying "We will never charge for access to features" feels like a bit of a lie now that Pro is a thing.


(+1)

That is true, I'm sorry about that  😭. I should have qualified that statement when I made it, though I did really believe it at the time. The issue is that there are a lot of stability improvements and features to make, and I have a lot going on in my life. So my goal with collecting money is to be able to pay a developer to come work on the project. I'm sorry though, you're right that I went back on my word.

Been playing around with it, it's really good! I'd say it's about as good at parsing its character information as something like Character.AI is, and I'm sure I'll have lots of fun creating characters in it.

However, I do wonder when larger databases will be integrated. 7 Gigs of Pygmalion leaves your characters still surprisingly limited and robotic, and just a bit TOO dumb. 

Other than that, utterly fantastic work. It's amazing to see this technology made more easily accessible with the right equipment.

Hi, thank you for the kind words! What model do you want? I can add some other bigger ones.

As it turns out, I missed the options on the various versions of each model, I'll need to play around with those, I think.

This is an ace piece of software, friend! Great work.

Thank you 😁

Will there be an option to save the chats, so you don't have to start over every time you use a model?

Yes that is the next feature I will add, definitely top of mind!

(1 edit)

Perfect ^^.

I really enjoy this program, even if I have slight issues in some parts, but that's to be expected; I have no clue which model I should use and which my PC can handle xD.

Maybe you have an idea how to fix the bot looping the same message and getting kind of "stuck"?

(2 edits)

Thank you! Yeah I've been talking to people in Discord about the looping. Right now my suggestion is to just use a bigger model or try prompting differently. Sorry about that!

(1 edit)

Hi itt66, saving chats is now supported!

Yeah, I saw it. But sadly I won't be able to use it ^^°

I really don't like subscriptions; I would rather do a one-time payment.

But it is how it is, I still like the program.

Hi, I've downloaded the app and given it permission to access the network, but it is stuck at "loading model"... Am I missing something, please? Do I need to configure or download something else?

Hi! Could you come and ask in Discord? Then we can debug more easily. Thanks!

It runs quite well on Windows. Nice work :) Did you develop the software alone? Any GitHub repo available?

(+2)

Thank you! Yes I develop it alone, though I am looking for a co-founder, preferably one with React Native experience who wants to bring this to mobile (and add some paid features, so we could make money there). No GitHub today, I have thought about open source but haven't decided to do it yet.

Does anyone know how to regenerate/refresh a single message? I hate having to reload the entire chat, just because one message was bad.

Sorry, no way to do that currently. But that's a nice feature to add and is possible, I will look into that when I next have time!

(+1)

Thanks for adding a local option. This app is great.

(+1)

Glad you like it!

Does it generate images or it's only text?

(1 edit) (+1)

Only text! But I have heard people ask for images, maybe some day.

Why does the AI in this feel robotic, not really flowing and learning like all limited-memory AI? Literally I can say hi to someone and it literally says a set dialogue every time.

(3 edits) (+1)

Hey! It can really depend on the model you're using; which are you using, and have you tried a few? And have you tried modifying the temperature (try increasing it)? That may help it feel more creative.

If those don't help it may be the character. If the character prompt length (i.e. personality + scenario + example dialogs + first message) is too long there may not be enough context length left. So trying on some different characters may help things.
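Under the hood, temperature just rescales the logits before softmax, which is why raising it makes sampling feel more creative. A quick sketch of the math:

```python
import math

def softmax_with_temperature(logits: list[float],
                             temperature: float = 1.0) -> list[float]:
    """Scale logits by 1/temperature, then apply softmax.

    Higher temperature flattens the distribution (more varied, "creative"
    token picks); lower temperature sharpens it toward the top token.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.0]`, the top token's probability is noticeably lower at temperature 2.0 than at 0.5, so repetitive "set dialogue" answers become less likely (at the cost of more randomness).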

PS. If you want to join the Discord we can chat back and forth more easily, would love to help you get this working better.

(+1)

This looks amazing! Thank you guys! I am curious, though, are you guys intending to integrate image generation into the chats? Such as scenes in a story while roleplaying?

(+1)

Yeah that would definitely be awesome. I have been keeping my eye on it in case there is an approach I should take. Want to join the Discord and we can chat more about this?

(+2)

i am now back home. hammer time 😈

(+1)

OH YEAH  🎉

(+2)

HOLY SHIT IT'S ON WINDOWS... oh wait, shit, I am on vacation rn and my high-end gaming PC is at home. dam.

(+3)

Hi all, excited to share that Windows and Ubuntu are out in beta! 

You can download them from Itch directly or on https://www.hammerai.com/desktop. If you have any issues or suggestions, please feel free to join the Discord and let me know there: https://discord.gg/kXuK7m7aa9

Enjoy!

(+1)

so excited to finally be able to try it out

(+2)

Good luck with the Windows release!
Also can't wait to try it

Thanks, it's out now!

I've recently been interested in setting up llamas on Windows, using KoboldAI and SillyTavern as the medium to hook into it. The main thing I found was that fully offloading the model onto your VRAM is an insane boost in performance and the sort of 'end goal' you want when loading models. Having high RAM is a bit of a red herring, since RAM speeds are meh in comparison. Even if it's just 1 layer, you'll feel the difference.

For 7B models, you'd need at minimum 6 GB of VRAM. And even then, you'd probably need a model about 3 GB large, because not all 7B models are born equal.

So is this something your software will do automatically? In terms of figuring out context sizes, layers, BLAS batch sizes, etc., to determine the optimal loadout for the best speeds? Are different quantized versions of models available depending on VRAM availability?
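For a back-of-the-envelope estimate of how many layers fit in VRAM, you can assume roughly equal-sized layers; the 1 GiB reserve for KV cache and scratch buffers is a guess, and llama.cpp's actual layer-offload count still needs hand-tuning on top of this:

```python
def gpu_layers_that_fit(model_bytes: int, n_layers: int,
                        vram_bytes: int, reserve_bytes: int = 1 << 30) -> int:
    """Rough estimate of how many transformer layers of a quantized model
    fit in VRAM. Assumes layers are roughly equal-sized and reserves some
    VRAM for the KV cache and scratch buffers (an assumption, not a spec).
    """
    per_layer = model_bytes / n_layers
    budget = vram_bytes - reserve_bytes
    if budget <= 0:
        return 0
    return min(n_layers, int(budget // per_layer))

GIB = 1 << 30
# e.g. a ~3 GiB Q4 7B model (32 layers) on a 6 GiB card: all layers fit
layers = gpu_layers_that_fit(3 * GIB, 32, 6 * GIB)
```

This is exactly the kind of heuristic an app could run automatically to pick a default offload count per GPU.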

(+1)

Hey! That's a good question and seems like something we should support, though right now we just have some hard-coded presets. Happy to chat more about it in our Discord if you're interested: https://discord.gg/kXuK7m7aa9

Is 16 GB of RAM enough?

Mm short answer is that I don't know. But is this for Web or Desktop? If Desktop, what computer are you using? 

Windows, Web, Opera gx

Got it. So I would say try it and see? But the Windows Desktop version is now also out, so maybe that will work?

(+1)

Got this shit tagged for when the Windows or Android release comes out. I'd love to use an AI program for my storyboarding without having to pay 9.99 to get more than like 3 messages. Will actually donate too if it does.

(+1)

Thanks for your support, I actually recently made some progress on getting the build working. No date yet, but  I also really want a Windows build and am very sorry for the delay :(

No problem bud. This stuff is a lot of work, and I appreciate the effort you're putting in. Good luck!

Thanks! Finally got it out if you want to try :)

Sounds good my dude.

I know I'm the 10,000th person to ask, but when is it coming to windows?

(2 edits) (+1)

I know I feel quite bad, everyone is waiting :(  It is still my #1 priority, but I've had some issues with getting the libraries to compile (specifically linking them into HammerAI), and things in life have gotten crazy. I will have it out as soon as I can though, sorry everyone.

In case anyone is reading this and is interested, I'd be happy to bring on any contributors! We will keep desktop and web free forever, but I was thinking we could build a mobile version with some paid features and then split profits between contributors.

Okay well it took two months beyond that, but it's out now!

when are we going to get a windows release?

(1 edit)

It is my top priority right now! But I have run into some issues with MLC-LLM that are taking more time to get working than I originally thought. The app itself runs fine on Windows, just not yet the AI chat part. If you join the Discord, I plan to post there as soon as it's ready for early testers.

I guess the real answer is.. today!

(+2)

Following this with interest for a Windows release.

I love the fact you included a screenshot of someone instantly going for the "can I kiss you" with their waifu, you know your target demographics that's for sure.

(+1)

You found the easter egg 😂

(+1)

Windows is now out!

(+1)

oh and um, why does it not work with Windows (for now)? Please explain to me, I would like to know.

I just haven't yet finished up the work to support it! There is no technical reason why it can't work on Windows, and I definitely want Windows support as soon as possible.

It's working now!

(+1)

dam, one of the few moments where Apple users have something cool that Windows users don't

True haha. But Windows is out now, so it's even again xD

(1 edit)

This seems awesome, so I wanted to try the web version. Whenever I tried a character, it wanted to download the AI model but displayed the message "Cannot find adapter that matches the request". Is there a way for me to manually download the model, or maybe I forgot to enable something in my browser? (I use Google Chrome)

Hmm, that is unexpected. 

It looks like these issues: https://github.com/mlc-ai/web-llm/issues/105#issuecomment-1594835134 & https://github.com/mlc-ai/web-llm/issues/128#issuecomment-1595151465, which are "likely because your env do not support the right GPU requested(due to older mac) or browser version" or "likely mean that you do not have a device that have enough GPU RAM".  

Could you try going to https://webgpureport.org/ and seeing what it says?

Also if you'd like to join the Discord it would be great to continue this conversation there: https://discord.gg/kXuK7m7aa9

(+1)

I think it really was a problem with my Google Chrome, because I tried with Microsoft Edge (good god...) and it seems to be working normally. I'll have to check what's wrong with it later, but in case you'd like some info, my GPU is an Nvidia GeForce RTX 3060 Ti and my Chrome version was 117.0.5938.63 (it says it's the most recent), which, from what I read online, was supposed to support WebGPU.

(+1)

Yada yada, same as all the other comments. Looks awesome.

(+1)

Today is the day...

(+2)

let me know when this is on windows

Will do!

(+1)

let me know when this is on windows (1) 

It finally happened :)

We're out now!

(1 edit) (+2)

I always loved playing AI Dungeon back in the day; I'm looking forward to this coming to Windows!!!

(+1)

We will let you know!

Today is the day!

(+6)

Also would love to know when this comes to windows! Super awesome to see :D

(+1)

Sounds good we'll let you know!

(+1)

We're out on Windows now!

(+2)

yes, can you also reply to me once it hits windows?

(+1)

Will do!

We're out on Windows now!

(+2)

let me know when this is available for windows.

Will do!

We're out on Windows now!

(+1)

Hoping this comes to Windows soon

(+1)

I'll let you know when it does!

Well it took a long time, but it's out on Windows now!