
Why does this game run poorly?


Ebolaids

Question

This game runs at literally 23fps average.

 

Even GrindWars 2 runs at a steady 40fps on low settings on my laptop.

Even Skyrim on medium runs at 30fps.

Even RuneScape, WHICH IS MADE IN JAVA, runs fine.

 

 

Intel Core i5-3317U

Intel HD 4000

4GB DDR3

120GB SSD

 

 

What happened to real programmers, back when games used to be optimized?

 

It almost feels like someone paid Obsidian to make the game run like crap to force people to buy new hardware. I'm not even joking.

 

EDIT: Title edited.


Recommended Posts


It seems to me like they prioritized bug fixing over performance optimization for the final pre-release crunch. Hence the altered minimum/required specs and the lack of AA/character texture quality sliders, etc. And if they can get rid of the "bugsidian" label, then more power to them! :)

 

Wouldn't mind an optimization/performance pass, though, if possible. 

(Not entirely sure what the 'graphics quality' slider does. I cannot really detect any difference in either visual quality or sheer GPU/CPU fan pitch/volume [very sophisticated method of gauging performance, I know..])

This statement is false.


 

 

 

C and C++ are middle-level languages, so comparing them to bloat like C# and Java is a huge mistake. The 'fastest' game engines were made in C++. When it comes to DirectX, I wouldn't call it fast. It's just popular.

 

 

Wrong, the fastest games were written in Assembly language and eventually C, then C++.  C# and Java are higher-level languages, but they are perfectly capable of producing very fast games; it all depends on the skill and knowledge of the developer!  And regarding DirectX, it is as fast as you can get when writing to a graphics card without creating your own device drivers!  Do you know how DirectX functions under the hood?  I do!

 

 

Which ones were written in assembly language? I hope you're not talking about pre-'95 engines? I'm dying to see a fast engine made in C# or Java (or any other application, for that matter). It doesn't only depend on skills and knowledge, but on the garbage collector, JIT and other language-related factors. A perfect C/C++ game engine will be much better (and faster) than a perfect C#/Java engine. The latter have a larger memory footprint, and memory allocation is much faster in C/C++. nVidia claims Linux can achieve 300% more performance in OpenGL than Windows with DX. They were probably thinking about the upcoming Vulkan (the new OpenGL, but not exactly). It will give much more power to developers. It will even be possible to control the graphics card directly from the engine level.

Edited by Pawel.pc44

Good ol' Interplay: "By gamers for gamers". Todays: "By businessman for money!"


 

 

Could you point us to a PC game which uses the Unity engine and works well? PoE sucks, Wasteland 2 sucks. It seems there are no problems with Unreal, Quake, Crytek or Serious Engine. Do you really think developers can mess up Unity so badly that it works like that? I'm not saying you're mistaken, but maybe there's something wrong with this engine? Maybe it's harder to make an optimized game with it?

 

 

PoE works fine, even maxed out, on my gaming laptop; perhaps your computer is the issue?  Until you are able to see the code, you are just blowing smoke.  Unity has had MANY successful games for many years now; how about you post some reliable counter-evidence?

 

 

My PC is fine. I can run Far Cry 3 on the highest details. My frame drops aren't graphics-related. The game slows down for a moment after entering a new location or loading a save game. It's a cache issue, I think. You didn't answer my question.

Edited by Pawel.pc44

Good ol' Interplay: "By gamers for gamers". Todays: "By businessman for money!"


My rig:

Xeon E3-1230v2

EVGA GTX 970 SSC

8 GB DDR 1600

500 GB SSD Samsung 850 Evo

 

Playing in 1080P.

I can play Dragon Age: Inquisition maxed out at a consistent 60 fps!

Nearly every game there is!

 

Going through Defiance Bay & Twin Elms the frame rate varies from 45-60 fps.

Every outdoor & dungeon scene is at a consistent 60 fps, though.

But this sh*t is ridiculous. Optimise your engine please! This should not be an issue at all....


Did they not add an AA slider now in the 1.03 patch?

 

@jaydee.2k

 

Running the same GFX series as you and it's silky smooth here. Though I've got 16GB of RAM and an i7 3770K. CPU-wise I've got maybe a 10% edge on synthetic tests compared to your Xeon, and RAM-wise you should be good anyway. I also did a clean install of Win7 a month back, as I hate the OS/software bloat that always happens over time, heh.

Are you only running 1 SSD, no HDD for storage? I am assuming the game is installed to the SSD, of course...

Edited by navlasop

I installed Win 7 fresh 14 days ago because I bought the SSD for Pillars.

Got an HDD for storage for sure, but Pillars is on the SSD of course....

First game I've got performance issues with. This is bad......

 

EDIT: The AA slider, yeah, but it doesn't influence performance at all on my side because my GPU usage is between 20-30%. This game is clearly CPU-bound....

Edited by jaydee.2k

 

 

 

 

C and C++ are middle-level languages, so comparing them to bloat like C# and Java is a huge mistake. The 'fastest' game engines were made in C++. When it comes to DirectX, I wouldn't call it fast. It's just popular.

 

 

Wrong, the fastest games were written in Assembly language and eventually C, then C++.  C# and Java are higher-level languages, but they are perfectly capable of producing very fast games; it all depends on the skill and knowledge of the developer!  And regarding DirectX, it is as fast as you can get when writing to a graphics card without creating your own device drivers!  Do you know how DirectX functions under the hood?  I do!

 

 

Which ones were written in assembly language? I hope you're not talking about pre-'95 engines? I'm dying to see a fast engine made in C# or Java (or any other application, for that matter). It doesn't only depend on skills and knowledge, but on the garbage collector, JIT and other language-related factors. A perfect C/C++ game engine will be much better (and faster) than a perfect C#/Java engine. The latter have a larger memory footprint, and memory allocation is much faster in C/C++. nVidia claims Linux can achieve 300% more performance in OpenGL than Windows with DX. They were probably thinking about the upcoming Vulkan (the new OpenGL, but not exactly). It will give much more power to developers. It will even be possible to control the graphics card directly from the engine level.

 

 

Plenty of games made with Unity perform well; they can be seen here (not all of them are great, but many are; again, it depends on the developer's skill with the engine): http://unity3d.com/showcase/gallery

 

Everything I did in the 80's was in pure Assembly, this was on the C64.  When I moved to the Amiga it was Assembly and C, eventually C++.  In the 90's it was C/C++, but Assembly was still used, especially in 32-bit DOS games where I wrote my own bank switching code that sat on top of VESA.  All my device drivers were 100% assembler and ran in Ring-0, and included bi-modal IRQ handlers to reflect IRQs up to higher-level protected mode code.  For Windows games, we used the GameSDK (this was the precursor to DirectX) so that we did not have to write bank switching code for 8 million different video cards.  The games themselves were written in C or C++.  In the 2000's we used mainly C++, especially on applications such as Adobe Photoshop, and we had our own custom framework that ran on both Windows and the Mac.

 

I never said the game engines were written in C# or Java, but the game logic certainly can be, and it is done that way a lot of the time nowadays.  DirectX goes directly to the hardware, so it is very fast.  OpenGL might be faster under Linux, but that is mainly because Linux is lightweight compared to Windows: Windows has to ship with support for all kinds of hardware, whereas Linux can be configured to only support what is actually in the machine, leading to much better performance.  Unfortunately, there is no real market for games on Linux when the vast majority of gamers use Windows, so the nVidia point is moot.
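
Just to illustrate what I mean about game logic in C# (a generic sketch of my own, nothing to do with PoE's actual code): the standard way managed code keeps the garbage collector quiet is to pre-allocate and reuse objects instead of allocating new ones every frame. The Projectile type and the pool capacity below are made-up examples.

    // Illustrative only: a tiny object pool, the standard trick for avoiding
    // per-frame allocations (and therefore GC pauses) in C#/Java game logic.
    // "Projectile" and the pool capacity are hypothetical examples.
    using System.Collections.Generic;

    public class Projectile
    {
        public float X, Y, Speed;
        public bool Active;
    }

    public class ProjectilePool
    {
        private readonly Stack<Projectile> free = new Stack<Projectile>();

        public ProjectilePool(int capacity)
        {
            // Allocate everything up front, outside the per-frame hot path.
            for (int i = 0; i < capacity; i++)
                free.Push(new Projectile());
        }

        public Projectile Spawn(float x, float y, float speed)
        {
            // Reuse an existing instance instead of allocating a new one.
            Projectile p = free.Count > 0 ? free.Pop() : new Projectile();
            p.X = x; p.Y = y; p.Speed = speed; p.Active = true;
            return p;
        }

        public void Despawn(Projectile p)
        {
            p.Active = false;
            free.Push(p); // Back to the pool; no garbage generated.
        }
    }

Whether any particular Unity game actually bothers to do this is a different question, which is kind of the whole point.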

 

nVidia used to have a big advantage in Windows because they had very good connections to low level developers at Microsoft (I had access to the same developers when  I was working on video drivers for video editing hardware).  Some of those good developers left Microsoft and moved to AMD about 10 years ago, so AMD device drivers (which were notoriously bad) have become better.

 

You are right that there are options for developers to query/enumerate the hardware features and then have the code dynamically use that knowledge to boost performance with vertex buffers, texture sizes, shader logic, etc., but very few developers (especially the indie devs) have the knowledge and skill to do this.  And that IMO is why you can have game X perform like crap and game Y perform great even when both are using the same engine.  This is even more true on mobile devices, where things like texture and vertex management become extremely important.
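
To make that concrete, here is a rough Unity-style sketch (again my own illustration, not Obsidian's code) of probing the hardware once at startup and scaling a few settings to match. SystemInfo and QualitySettings are Unity's built-in APIs; the thresholds are arbitrary examples.

    using UnityEngine;

    // Illustrative sketch: query the hardware once at startup and adapt
    // quality settings accordingly. The thresholds are invented examples.
    public class AdaptiveQuality : MonoBehaviour
    {
        void Awake()
        {
            Debug.Log("GPU: " + SystemInfo.graphicsDeviceName +
                      ", VRAM: " + SystemInfo.graphicsMemorySize + " MB" +
                      ", CPU cores: " + SystemInfo.processorCount);

            // Drop texture resolution on GPUs with little memory
            // (masterTextureLimit = 1 means "skip the top mip level").
            if (SystemInfo.graphicsMemorySize < 512)
                QualitySettings.masterTextureLimit = 1;

            // Only enable MSAA if the card can comfortably afford it.
            QualitySettings.antiAliasing =
                SystemInfo.graphicsMemorySize >= 2048 ? 4 : 0;

            // Keep shadows cheap on low-core CPUs.
            if (SystemInfo.processorCount <= 2)
                QualitySettings.shadowDistance = 20f;
        }
    }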

 

FWIW:  I am glad to have had the chance to get into development back in the late 70's.  Back then we packed quite the punch on very limited hardware like the Atari consoles and 8 bit computers like the C64 etc.  We did that by using mostly assembly code and using various hardware tricks, this is a lost art today.  Games and Applications have become bloated crap, and very few people innovate, most use 3rd party SDK's or worse they use code on the internet that they do not understand, especially for AI, pathfinding, and high speed rendering algorithms.  Sure, the graphics and sound are much better,  but not enough to explain the difference in size/bloat.  It's not that programmers don't want to make small and fast code, but today deadlines are short and investors/publishers are more interested in a cash grab than writing some solid code.  It is always great to roll your own engine, but very few projects will allow that, and if they do you can count on them to try and release as many games as they can with it, even if the engine becomes old and outdated!  That is why 3rd party engines are useful,  the developers of Unity, Unreal, Torque, and other engines can focus on optimizing the low level stuff while the game developers can work on the game logic.  BUT, they should have a VERY GOOD understanding of how the engine works so that their code does not run like crap with it.

Edited by StubbinMyToe

There's something wrong with the game performance. I can't run most maps at 60 FPS, especially in Defiance Bay and underground areas like sewers and dungeons. These places run between 30 and 45 FPS, and in combat I get some one-second drops below 30 FPS. Changing resolution doesn't do anything: the framerate is the same at 720p, 1080p and 4K! Also, loading times are terrible: the first one takes 1 minute, and changing maps takes 15-20 seconds.

 

The temps are low and the CPU usage doesn't go above 35% (this is probably the issue here).

 

This is my computer:

 

AMD FX-8320 3.5GHz

8 GB DDR3 1600MHz

Nvidia GTX 760 2GB GDDR5

Windows 8.1 x64

I fixed the problem by unchecking the Fullscreen option in the graphics menu. Can you believe it?

 

The game still runs in fullscreen (borderless fullscreen), but the performance improved a lot. Now I get a constant 60 FPS everywhere but in Copperlane, like most people.


 

Nice try, but wrong: looking at frame rates does not tell you anything about the engine, because you do not see the code in a profiler to see where the bottlenecks are.  There are ways to control your memory footprint and other things if you know what you are doing; people do it all the time on mobile devices.

 

Have you developed any native engines?  I have, many times over the past 30 years as a developer.

 

It's incredible how arrogant you are. You don't need to be able to write code to observe how **** an engine performs, especially when you have other engines to compare it to. Now, if you want to understand WHY it performs like ****, then yes, you'll need to be able to understand the code the engine is written in.

Explain to me why all the Crysis games run perfectly stable (except the first one, but that one is famous for bad optimization), why Dark Souls 2 (DX11) runs at a perfect 60fps without even one dip, with a lot of particles and models on screen, and why Dota 2 (a ****ing RTS-style game) also runs perfectly stable while having A LOT more going on than Pillars of Eternity. Even Arma 3 runs at a constant 40+, and the Arma engine is famous for being absolutely awful when it comes to performance.

No, you don't ****ing need to understand code to realize that the Unity engine is complete ass at efficient resource usage.

I hate Unity.


OK, on top of the generally poor performance of the game, there also appear to be memory leak issues which degrade performance over a play session:

 

http://forums.obsidian.net/topic/73539-performance-drop-after-a-few-map-transitions

 

Is anyone else noticing performance degrading over time? I would guess it would be more noticeable on weaker PCs since more powerful machines have more headroom before performance is affected.
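
If anyone with access to a development build (or a modder) wanted to confirm that, a crude check is simply logging the managed heap after every map load and watching whether it keeps climbing. A minimal sketch, assuming a hypothetical "map loaded" hook that the game does not actually expose (and note this only sees the managed heap, not native/asset memory):

    using System;

    // Minimal sketch: log managed-heap size over time to spot leak-like growth.
    // GC.GetTotalMemory is standard .NET; the "map loaded" hook is hypothetical.
    public static class HeapWatch
    {
        private static long baseline = -1;

        public static void OnMapLoaded(string mapName)
        {
            long bytes = GC.GetTotalMemory(false); // current managed heap, no forced collection
            if (baseline < 0) baseline = bytes;

            Console.WriteLine("[{0:T}] {1}: heap {2:N0} KB (+{3:N0} KB since start)",
                DateTime.Now, mapName, bytes / 1024, (bytes - baseline) / 1024);
        }
    }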


 

 

Nice try, but wrong: looking at frame rates does not tell you anything about the engine, because you do not see the code in a profiler to see where the bottlenecks are.  There are ways to control your memory footprint and other things if you know what you are doing; people do it all the time on mobile devices.

 

Have you developed any native engines?  I have, many times over the past 30 years as a developer.

 

It's incredible how arrogant you are. You don't need to be able to write code to observe how **** an engine performs, especially when you have other engines to compare it to.

 

 

I have already explained IN DETAIL, go read it.  I am not arrogant; I just have a low tolerance for people who make statements without any knowledge of software development.  I am done with you; come back when you have a clue.

 

Edit:  I finally found out how to ignore this kid.

Edited by StubbinMyToe

Just found out something interesting: when I go to 'the fireplace', where my FPS is downright unplayably poor, my CPU use in Task Manager is around 20 percent. When I move away, not only does my FPS go up, but my CPU use jumps to the low 40s!

To quote a great philosopher: What the...?!?!?

 

Anyone know what is going on under the hood of this game? Because I really have no ideas anymore.


I am able to play this on an IdeaPad U310 ultrabook without major issues. I had concerns that my Intel graphics would bottleneck the game, but was pleasantly surprised it runs as well as it does (albeit at a relatively low native resolution). There are times the game stutters, and I'm guessing frame rates are not stellar (I haven't run a count), but it is definitely playable -- and I'm loving it!

 

I hope other players find similar success on their lower spec rigs! 

 

Specs:

  • i5-3317U (1.7 GHz)
  • Intel HD 4000 graphics
  • 4 GB RAM
  • 1366x768 (this low resolution may be my saving grace?)

 

 

 

Nice try, but wrong: looking at frame rates does not tell you anything about the engine, because you do not see the code in a profiler to see where the bottlenecks are.  There are ways to control your memory footprint and other things if you know what you are doing; people do it all the time on mobile devices.

 

Have you developed any native engines?  I have, many times over the past 30 years as a developer.

 

It's incredible how arrogant you are. You don't need to be able to write code to observe how **** an engine performs, especially when you have other engines to compare it to.

 

 

I have already explained IN DETAIL, go read it.  I am not arrogant; I just have a low tolerance for people who make statements without any knowledge of software development.  I am done with you; come back when you have a clue.

 

Edit:  I finally found out how to ignore this kid.

 

 

You're missing one obvious point: Obsidian devs aren't noobs who don't know how to optimize a game. They have many good games to their name already. For example, Fallout: New Vegas ran better than "fall_oblivion_out 3" on my old PC. :biggrin:

Good ol' Interplay: "By gamers for gamers". Todays: "By businessman for money!"


There's something wrong with the game performance. I can't run most maps at 60 FPS, especially in Defiance Bay and underground areas like sewers and dungeons. These places run between 30 and 45 FPS, and in combat I get some one-second drops below 30 FPS. Changing resolution doesn't do anything: the framerate is the same at 720p, 1080p and 4K! Also, loading times are terrible: the first one takes 1 minute, and changing maps takes 15-20 seconds.

 

The temps are low and the CPU usage doesn't go above 35% (this is probably the issue here).

 

This is my computer:

 

AMD FX-8320 3.5GHz

8 GB DDR3 1600MHz

Nvidia GTX 760 2GB GDDR5

Windows 8.1 x64

 

My computer

 

AMD FX-8350 4.2 GHz

16 GB DDR3 1600MHz

AMD 7950 3GB GDDR5 (x2)

Windows 8.1 pro

1440p monitor

 

 

Defiance Bay maps like Copperlane still fall beneath 60fps. The game only supports one GPU, but it never goes above 50% usage, while the CPU is sometimes using all 8 cores at 50-60%, which is quite, uh, impressive for a game like this. There is definitely room for optimization.

 

But it plays very similarly to Wasteland 2 in my experience, which I wasn't all that thrilled about either. 

 

I don't know, maybe it's the resolution and any optimization just falls apart above 1080p, but there just isn't enough happening in the game or on screen to justify using up to 50% of an HD 7950 and an 8-core FX-8350. This game sucks up resources.

Edited by ShadowStorm

You're missing one obvious point: Obsidian devs aren't noobs who don't know how to optimize a game. They have many good games to their name already...

I'd add Neverwinter Nights 2 as another example - though it was a beast of a game when released, it did far more graphics-wise than Pillars (choice of variable overhead viewpoint or third person viewpoint) and with far lower hardware requirements (2.4GHz CPU, 512MB RAM, 128MB GPU).

 

Indeed it seems rather a pity that Obsidian didn't build further on the Aurora-based engine they used for NWN2, but presumably that would have required more licensing from EA/BioWare.

 

It's likely a fair point to say Unity3D is not being used efficiently since it is treating a 2D-2.5D background as a (large) collection of 3D objects (120,000+ in Gilded Vale). But at the same time, it was Obsidian's choice to use it so whatever the balance of inappropriate engine/poor implementation is, the outcome is still all their doing.
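
For what it's worth, the stock mitigation in Unity for "lots of small static objects" is to batch them so they render in far fewer draw calls. A bare-bones sketch of that general technique (purely illustrative; I'm not claiming this is what PoE does or fails to do, and "SceneryRoot" is a hypothetical object name):

    using UnityEngine;

    // Illustrative only: combine all static scenery under one root so Unity
    // can render it in far fewer draw calls.
    public class BatchScenery : MonoBehaviour
    {
        void Start()
        {
            GameObject root = GameObject.Find("SceneryRoot"); // hypothetical scene object
            if (root != null)
            {
                // Merges child meshes that share a material into combined batches.
                StaticBatchingUtility.Combine(root);
                Debug.Log("Static-batched scenery under " + root.name);
            }
        }
    }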


Doesn't look like there were any performance improvements in 1.05. Copperlane and Skaen Temple still run horribly for me.

 

Oh, and the performance degradation after a few map transitions is still present, perhaps worse.

Edited by Justinian

I finally had the opportunity to reinstall my rig and switch around mainboards (as I am self employed working from home, I don't get to do that too often..)

 

PoE now finally runs smoothly on my rig. Still wondering what caused the problem though...
