b6d4a82c15 · Active Member · Posted April 19, 2024

Hey guys, has anyone of you tried feeding the Metin2 game/server source to some kind of LLM yet? I think the best use case would be finding exploits and memory leaks. Gemini 1.5 Pro seems especially interesting because of its large context window.
ɛʟ Ǥʟɑçѳи 🧊 · Management · Posted April 19, 2024

I have had this idea in mind, but also for some of the forum content... It might be interesting, but I have other priorities.
Koray · Active+ Member · Posted April 19, 2024

I had tried something similar before via a custom GPT in ChatGPT, but instead of searching for bugs/exploits, I attempted to create a simple ping/pong system by providing all the details, just to see what it could do, for experimental purposes.

Speaking for ChatGPT, though: it doesn't automatically scan the entire content, because of the large number of files. It only makes heuristic guesses based on file names, or scans a file if you point it at one. Even when it does process the content correctly, the whole pass takes so long that it starts skipping parts after a while, until you give the main command again. And even if you manage to get the desired result once, sampling temperature means you can get completely unrelated results on subsequent attempts.

In summary, getting efficient results this way is not really achievable. It's quite useful for researching specific topics in specific parts of the source, rather than scanning the entire content. Fully automating it, however, is not yet feasible, at least for ChatGPT.
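Koray's point about name-based guessing can be made concrete. Below is a minimal sketch (not anything ChatGPT actually runs; the keyword list and function name are made up for illustration) of what "heuristic guesses based on file names" amounts to when a tool can't afford to read a whole source tree:

```python
import os

# Hypothetical keywords a reviewer might treat as "exploit-relevant".
SUSPECT_KEYWORDS = ("packet", "auth", "buffer", "input", "socket")

def heuristic_candidates(root: str) -> list[str]:
    """Return source files whose *names* look relevant, without
    reading their contents (cheap name-based triage, as described above)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            lower = name.lower()
            if lower.endswith((".cpp", ".h")) and any(k in lower for k in SUSPECT_KEYWORDS):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Only the short-listed files would then be sent to the model in full, which keeps the prompt inside the context window at the cost of missing bugs in innocuously named files.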
b6d4a82c15 · Active Member (Author) · Posted April 19, 2024

Replying to Koray:

ChatGPT is honestly obsolete today. I use the claude-3-opus LLM daily for commercial programming and it exceeds ChatGPT by a mile. When it comes to context size, it's like you said: not possible straight away with ChatGPT. There is, however, a way around this with some additional layers of complexity, like the Cursor editor, though be warned it's closed source. It's basically a fork of VS Code that lets you ask LLMs about your code; you can also ask it to write anything, and it has the context of your entire codebase. I tested it a few months back and it honestly wasn't bad. I told it to explain some shit-code written by Ymir that I didn't understand, and it did quite well.
Abel(Tiger) · Premium · Posted April 20, 2024

You don't need to train it on your source code. You just need to take a good model and give it more context. For example, I use GitHub Copilot to help me with some trivial tasks while coding. If I need info about the whole source, I just use @workspace in the chat when I ask a question, and then it has more context about the source.
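For anyone who hasn't used it: `@workspace` is typed directly at the start of a Copilot Chat prompt (the question itself below is just a made-up example):

```
@workspace Where is the incoming packet length validated on the game server?
```

Copilot then answers from an index of the whole open workspace rather than only from the currently open file.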
astroNOT · Premium · Posted April 21, 2024

There's no LLM at the moment that can "remember" as many tokens as a full game source requires, not even Gemini. A long input stream only means the model accepts that many tokens, not that it retains much of it, so we'll still have to wait. Anyhow, LLMs are definitely good; for me, GPT was the best at C++, for example.
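astroNOT's sizing point is easy to sanity-check with the usual ~4-characters-per-token rule of thumb (a rough heuristic, not a real tokenizer) and the context windows the vendors advertised in spring 2024:

```python
# Advertised context windows, spring 2024 (tokens).
CONTEXT_WINDOWS = {
    "gpt-4-turbo": 128_000,
    "claude-3-opus": 200_000,
    "gemini-1.5-pro": 1_000_000,
}

def estimate_tokens(num_chars: int) -> int:
    # Rule of thumb: ~4 characters per token for English text and code.
    return num_chars // 4

def fits_in_context(model: str, num_chars: int) -> bool:
    """Ballpark check: does num_chars of source fit in one prompt?"""
    return estimate_tokens(num_chars) <= CONTEXT_WINDOWS[model]
```

A full server source easily runs to tens of megabytes of text, i.e. millions of tokens, so even a 1M-token window holds only a slice of it; and fitting in the window is not the same as the model actually attending to every line.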