Everything posted by TheEqualizer

  1. Implemented depth of field, with hexagonal bokeh.
  2. You have a bug somewhere. The info you provided is not enough to pinpoint where it is. The large allocation is a big red flag: what may be happening is that a std::vector (or some other container) is growing until you run out of memory (if this is the case, you should get a std::bad_alloc exception; a contrived repro is sketched after this list). The debugger can show you the code that raised the exception. Since the client becomes unresponsive, maybe you are running into an infinite loop situation (and running out of stack space). Use the debugger well and you should be able to find the bug.
  3. Implemented screen space ambient occlusion. In the images below, the effect was exaggerated a little to make it more visible.
  4. I don't know what you changed, but you had better revert the changes because you made the problem worse. std::__throw_bad_array_new_length() is an internal GCC/libstdc++ function, so it's possible the compiler is emitting a call to it, but the linker can't find it in the libstdc++ library you are using. This tells me something is wrong with your compiler installation/configuration. You might want to install a different GCC package or use static linking (e.g. the -static-libstdc++ linker flag).
  5. These appear to be linker errors. Make sure the linker is properly configured and that you are not using incompatible libraries.
  6. Yes. The renderer is responsible for everything, anything that doesn't go through the renderer is not rendered. Effects/Particles are pre-processed before submission so they can be rendered as efficiently as possible. I have tested having many effects at once and the impact was minimal. One of the problems with the way Ymir renders things is that it sends very little work to the GPU (per draw call). GPUs work better when a large amount of work is sent because it allows the GPU/driver to better hide the latencies involved.
  7. Multiple light shadows are supported, but I don't expect this to be used.
  8. Implemented soft shadows using variance shadow mapping (the visibility test is sketched after this list). Shadows add quite a bit of cost when many dynamic objects are visible, so this is a good candidate for multithreading. There is decent self-shadowing as well.
  9. I decided to test the performance of the renderer when all the light slots are used. Right now the renderer supports a maximum of 17 lights (1 directional and 16 spot/point lights). In the test scene below, all 17 lights are active. Performance was not affected, so I will increase the maximum number of lights to 25 (1 directional, 24 spot/point lights). With some clever light management, it's possible to support many more than this, so I might revisit this later.
  10. If you are using Visual Studio, right-click the project these files belong to (probably "userinterface") and select [Add > Existing Item...], or highlight the project and press [Shift + Alt + A]. Then select the files and Visual Studio will add them to the project.
  11. Those are linker errors. I don't understand the language, but I believe the linker is not finding the definitions for those 3 symbols. This likely means you have not added the (.cpp) files that contain the definitions to your solution/project. Make sure your solution was set up properly.
  12. Did these problems exist before you fixed the black screen problem? What happens if you remove the "fix"? How did you fix the black screen problem (are you sure you didn't change something you weren't supposed to)? In the first video, it looks like the camera position is changing when you preview. I assume this "preview system" needs to change the camera to create/render the preview, and then the camera has to be restored to how it was before the preview so the game can be rendered normally. Are you sure this is happening? In the second video, either the texture for that effect was not loaded or the render states were not set up properly for the effect (start by checking the blend states; a typical alpha-blend setup is sketched after this list). EDIT: OK, after checking the first video some more, I realized that the camera isn't changing; the water is turning white, which means your render states are not set up properly. Somehow, when you activate the preview, you are causing the render states for water (and possibly effects as well) to become invalid/wrong. Again, check your render states.
  13. This could be a lost-device situation, but to be sure you have to check the return values of the Direct3D API calls (as I said, start with Present()). If you confirm that it is a lost-device situation, then maybe your render target system is not releasing its render targets when the device is lost. When a device is lost, all D3DPOOL_DEFAULT resources must be released before the device can be reset (a recovery sketch follows this list).
  14. There is not much to go on here. A black screen means your render functions are either silently failing or are not being called; you need to figure out which it is first. Running in debug may help. Log the return values of the Direct3D API calls, and if any of them is failing, troubleshoot it. Start by checking the return value of the Present() function.
  15. What do you mean by "fog level"? Distance? Density? If you are using the fixed-function pipeline, you can experiment with the fog-related render states (D3DRS_FOGCOLOR, D3DRS_FOGTABLEMODE, etc.); a linear-fog example is sketched after this list.
  16. MSAAx8 seems excessive to me. I tested MSAAx8 (in my 180-character test scene) at 2560x1440 resolution, and there was barely any performance difference relative to MSAAx4 or MSAA off. I think either you have a driver problem or Nvidia has some special optimization for MSAA. If you are using D3D9, this could be a driver issue (Intel does not have a good D3D9 driver; I think they use an emulation layer). Also, I seem to remember Intel saying that Resizable BAR is necessary for good performance with ARC GPUs, so if you don't have that enabled, it could be the reason for the performance hit.
  17. When combined with FXAA I think it produces the best quality. In the future I might add other forms of anti-aliasing.
  18. Added anti-aliasing support. "MSAA" is MSAAx4 (I chose this mode because all D3D11/D3D_FEATURE_LEVEL_11_0 GPUs are required to support it, so there is no need to check whether it's supported; the query is sketched after this list anyway). FXAA does a decent job of removing aliasing but introduces some blurring. MSAA+FXAA provides the best quality. SMAA can be combined with MSAAx2 (a mode the SMAA authors call "SMAA S2x"), but I chose not to implement this. MSAA was used here only for comparison; only FXAA and SMAA will be supported, because MSAA would make supporting other features more difficult later.
  19. The renderer now has a D3D11 backend. My initial intention was to have only a D3D11 backend, but I ran into some problems with D3D11, so I decided to have a D3D9Ex backend while I worked on the D3D11 backend. This week I finally got the D3D11 backend working, so the D3D9Ex backend will be deprecated, and all development will move to the D3D11 backend. Now, with D3D11, mesh deformation is performed once, in a compute shader (a dispatch sketch follows this list).
  20. That is really good performance. What hardware did you use for your tests? Maybe you could share your scene? Are you sure that in your tests you didn't have other things active (like shadows)? 100 fps is good performance; I don't think you need instancing, but implementing it is not a bad idea. I am not using instancing. Also, multithreading the mesh deformer would require a lot of syncing, since a vertex buffer lock is performed before deforming (and D3D9 functions cannot/shouldn't be called from multiple threads unless you create the device with the D3DCREATE_MULTITHREADED flag, which makes the runtime perform the syncing for you; see the sketch after this list). The best way to render is to avoid frequent state changes and frequent locks. Shader Model 3.0 supports vertex texture fetch, and you can use it to prepare one big texture containing data for many meshes, to render them efficiently in batches.
  21. Your question is interesting. When you have the source, you don't need players or servers to make things show up on screen. Both clients were changed to allow me to load maps, spawn entities, control the camera, etc. without needing a server or other players. All entities are as "heavy" as they would normally be if loaded/used in real gameplay. As for the textures: each client is using slightly different files. The version I am using for my renderer is the one I use normally, but then I downloaded another client just for comparisons; I didn't even realize some of the textures were different until I started testing. A couple of different textures have no effect on the results.
  22. So, I am building a renderer from scratch, mostly for fun, and I will post my progress here. Before I try anything fancy, I will be testing how efficient my current renderer is, so I will compare it against a vanilla D3D8 version of the client. To stress the render system, the test scene will contain 180 characters and 60 Metin stones, and the resolution will be set to 1600x900 for both clients. I will compare render times, lower is better (a simple way to measure this is sketched after this list). Results: Vanilla D3D8: 18-20 ms; Experimental Renderer: 4-5 ms. My renderer is 4x faster. This is a good start.
  23. It could be; I can't tell from just the data you posted, which is why I said you should check the logs. You need to understand what is going on.
  24. You are probably triggering an assertion. Run in debug, it should help you find the offending code.
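For the out-of-memory scenario in post 2, here is a minimal, contrived sketch of the failure mode: a container that keeps growing eventually throws std::bad_alloc, and a debugger set to break on thrown exceptions will stop right at the offending allocation. This is illustration only, not code from the client.

```cpp
#include <cstdio>
#include <new>
#include <vector>

int main()
{
    std::vector<int> v;
    try
    {
        for (;;)
            v.insert(v.end(), 1024 * 1024, 0); // unbounded growth, eventually fails
    }
    catch (const std::bad_alloc&)
    {
        std::printf("allocation failed after %zu elements\n", v.size());
    }
    return 0;
}
```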
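The variance shadow mapping visibility test mentioned in post 8, written as plain C++ for illustration (in the renderer this would live in a pixel shader). m1 and m2 are the filtered depth and depth-squared moments sampled from the shadow map; the minimum-variance clamp value is an assumption.

```cpp
#include <algorithm>

// Chebyshev upper bound used by variance shadow mapping.
float VsmVisibility(float receiverDepth, float m1, float m2,
                    float minVariance = 0.00002f)
{
    // Fully lit when the receiver is closer than the stored mean depth.
    if (receiverDepth <= m1)
        return 1.0f;

    float variance = std::max(m2 - m1 * m1, minVariance);
    float d        = receiverDepth - m1;
    return variance / (variance + d * d); // upper bound on the lit fraction
}
```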
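Regarding the blend-state check suggested in post 12: a minimal sketch of the usual alpha-blend setup in D3D9, assuming an already created device. The values shown are the common defaults for transparent effects, not necessarily what the client uses.

```cpp
#include <windows.h>
#include <d3d9.h>

// Typical alpha-blend render states for transparent water/effects.
void SetAlphaBlendStates(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
}
```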
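For the Present()/lost-device checks in posts 13 and 14, a minimal sketch of the recovery path. OnDeviceLost(), OnDeviceReset(), and g_presentParameters are hypothetical hooks standing in for wherever the client releases and recreates its D3DPOOL_DEFAULT resources.

```cpp
#include <windows.h>
#include <d3d9.h>

void OnDeviceLost();   // hypothetical: release all D3DPOOL_DEFAULT resources
void OnDeviceReset();  // hypothetical: recreate them
extern D3DPRESENT_PARAMETERS g_presentParameters; // hypothetical saved parameters

void EndFrame(IDirect3DDevice9* device)
{
    HRESULT hr = device->Present(nullptr, nullptr, nullptr, nullptr);
    if (hr == D3DERR_DEVICELOST)
    {
        OnDeviceLost(); // everything in D3DPOOL_DEFAULT must be released first

        // Wait until the device can be reset.
        while (device->TestCooperativeLevel() == D3DERR_DEVICELOST)
            Sleep(50);

        if (device->TestCooperativeLevel() == D3DERR_DEVICENOTRESET)
        {
            D3DPRESENT_PARAMETERS pp = g_presentParameters;
            if (SUCCEEDED(device->Reset(&pp)))
                OnDeviceReset();
        }
    }
    else if (FAILED(hr))
    {
        // Any other failure should be logged and investigated.
    }
}
```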
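A minimal sketch of the fixed-function fog states mentioned in post 15, assuming an existing D3D9 device; linear mode and the start/end parameters are just one possible configuration.

```cpp
#include <windows.h>
#include <d3d9.h>
#include <cstring>

// SetRenderState takes DWORDs, so floats must be passed bit-for-bit.
inline DWORD FloatToDword(float f) { DWORD d; std::memcpy(&d, &f, sizeof(d)); return d; }

void SetLinearFog(IDirect3DDevice9* device, D3DCOLOR color, float start, float end)
{
    device->SetRenderState(D3DRS_FOGENABLE,    TRUE);
    device->SetRenderState(D3DRS_FOGCOLOR,     color);
    device->SetRenderState(D3DRS_FOGTABLEMODE, D3DFOG_LINEAR);
    device->SetRenderState(D3DRS_FOGSTART,     FloatToDword(start));
    device->SetRenderState(D3DRS_FOGEND,       FloatToDword(end));
}
```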
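For the MSAAx4 mode discussed in post 18: even though 4x is guaranteed at D3D_FEATURE_LEVEL_11_0, this is roughly how the support query looks in D3D11 (the render-target format here is an assumption).

```cpp
#include <d3d11.h>

// Returns the number of quality levels for 4x MSAA; > 0 means the mode is supported.
UINT QueryMsaa4xQualityLevels(ID3D11Device* device)
{
    UINT quality = 0;
    device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality);
    return quality;
}
```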
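A sketch of how the mesh-deformation pass from post 19 could be dispatched from the CPU side; the shader, views, and 64-thread group size are assumptions, not the renderer's actual code.

```cpp
#include <d3d11.h>

void DispatchMeshDeformation(ID3D11DeviceContext* ctx,
                             ID3D11ComputeShader* deformCS,
                             ID3D11ShaderResourceView* inputVerts,
                             ID3D11UnorderedAccessView* outputVerts,
                             UINT vertexCount)
{
    ctx->CSSetShader(deformCS, nullptr, 0);
    ctx->CSSetShaderResources(0, 1, &inputVerts);
    ctx->CSSetUnorderedAccessViews(0, 1, &outputVerts, nullptr);

    // One thread per vertex, 64 threads per group (must match the shader's numthreads).
    ctx->Dispatch((vertexCount + 63) / 64, 1, 1);

    // Unbind the UAV so the deformed buffer can be read by later pipeline stages.
    ID3D11UnorderedAccessView* nullUav = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUav, nullptr);
}
```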
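The D3DCREATE_MULTITHREADED flag from post 20 is passed at device creation; a minimal sketch, assuming the window handle and present parameters already exist. The flag makes the runtime serialize calls, at some performance cost.

```cpp
#include <windows.h>
#include <d3d9.h>

IDirect3DDevice9* CreateMultithreadedDevice(IDirect3D9* d3d, HWND window,
                                            D3DPRESENT_PARAMETERS* pp)
{
    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, window,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                      pp, &device);
    return device; // nullptr if creation failed
}
```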
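One simple way to collect per-frame render times like the ones in post 22; RenderFrame() is a hypothetical stand-in for the client's render call, and this measures CPU time only.

```cpp
#include <chrono>
#include <cstdio>

void RenderFrame(); // hypothetical render entry point

void TimedFrame()
{
    auto t0 = std::chrono::high_resolution_clock::now();
    RenderFrame();
    auto t1 = std::chrono::high_resolution_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("frame: %.2f ms\n", ms);
}
```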