
Posted

In the combat of some games, I've seen that enemies don't take disengagement penalties into account: they try to reach one combatant while suffering lots of damage from the other characters around them, and end up dead even before reaching their goal.

 

I've seen this, for example, in Expeditions: Conquistador: enemies tried to reach my doctor (weaker than the rest of my group, positioned in the rear of my party), suffering a great deal of damage from my warriors. I've seen this recently in Age of Wonders 3 too.

 

This stupid behaviour ruins the realism of combat. Since Pillars of Eternity has disengagement penalties, I hope the AI will be smart enough to avoid them.

Posted

I agree, but the problem is very non-trivial to solve.

 

The obvious solution is not acceptable (to me, at least): that the AI, once engaged, remains engaged with that enemy until it is dead / incapacitated.  Such a strategy will be exploited by the player to ensure that enemies never attack anyone other than high HP / heavily armored companions, eliminating any risk associated with having "Glass Cannon" character types (I'll refer to this group as "high value targets" in the remainder of this post, because that's what they are).  Clearly, the AI should be able to attack any member of the player's party, and should focus its attacks on party members that can be taken out quickly.

 

But if the enemy relentlessly pursues high value targets, the player will exploit that by taking advantage of disengagement penalties -- in effect, creating a "maze" (in tower defense terminology) where very few, if any, of the enemy will successfully reach their intended targets.  This is worse than the first case -- rather than doing some damage (even if it is futile), you end up with enemies that do no damage at all.

 

I don't have a clue as to how to solve this in the general case -- in specific cases, scripting can be used to guide enemies along "protected" paths that the player cannot block, or simply to ensure enemies attack the player from multiple sides.  A general solution, though, would require pathfinding that takes into account the possible movement of the player's characters, other enemies, and the like and would quickly become very complex.  Theoretically, though, the AI should send one or two enemies directly at the high value targets, while sending others on indirect paths (ideally, at least two).  This forces the player to divide his warriors to engage multiple groups of enemies across a wide front, creating a situation where a "maze" cannot be created.

 

But that's really hard to do, given that simple "Get from point A to point B" pathfinding isn't consistently reliable. :(

 

One potential solution could be found by creating an invisible "cost / value field" around all characters -- positive numbers discourage the pathfinding from moving into these locations, while negative numbers encourage the pathfinding to move to those locations.  Low value targets (high HP, high AC, wielding melee weapons, good chance to hit, lots of attacks of opportunity) will be surrounded by high positive values, while high value targets will be surrounded by negative values.  As the pathfinding algorithm tries to find a path, it expands these fields to reflect the passage of time while reducing the weights towards zero (think of an explosion -- the force of the explosion diminishes as its effects are felt over a larger area).  Once a path is generated, it is checked periodically (once a second, say) and recalculated based on the actual positions of the combatants.
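As a rough illustration of that field idea (a textbook influence map -- the grid size, weights, and falloff below are entirely made up for this sketch), a pathfinder can fold the field into its edge costs:

```python
# Sketch of the "cost / value field" idea: each combatant projects a
# weight onto nearby cells (positive = avoid, negative = seek), and a
# Dijkstra-style search sums those weights into its step costs.
# All numbers here are illustrative assumptions, not PoE mechanics.
import heapq

def build_field(grid_w, grid_h, combatants):
    """combatants: list of (x, y, weight); weight > 0 for dangerous
    defenders, weight < 0 for high-value targets. Influence falls off
    with distance, like a fading explosion."""
    field = [[0.0] * grid_w for _ in range(grid_h)]
    for cx, cy, weight in combatants:
        for y in range(grid_h):
            for x in range(grid_w):
                dist = abs(x - cx) + abs(y - cy)   # Manhattan distance
                field[y][x] += weight / (1 + dist)  # diminishes over area
    return field

def find_path(field, start, goal):
    """Dijkstra over the grid; cost of entering a cell = 1 + its field
    value (clamped so step costs stay positive)."""
    h, w = len(field), len(field[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            break
        if d > dist.get((x, y), float("inf")):
            continue  # stale heap entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h:
                nd = d + max(0.1, 1.0 + field[ny][nx])
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = (x, y)
                    heapq.heappush(pq, (nd, (nx, ny)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a heavily weighted defender parked between start and goal, the cheapest path routes around them instead of walking through their threat zone, which is exactly the "avoid the lineholder" behavior described above.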

 

Of course, consider the scenario where the low value targets charge forward at the enemy -- the algorithm calculates that the best path is to retreat (avoiding engagement), then turn around and try to flank and reach the high value targets.  But if the player keeps advancing, the AI will keep retreating, and you'll end up with the player having to chase enemies all across the map to engage them, which wouldn't be fun.  So then you have to add a check to ensure that the AI keeps track of how long it has been since it engaged the enemy, and do something else if too much time has passed, but then you are back in the situation you started with... :(

 

Like I said, this is a hard problem to solve.

Posted

A hard problem to solve that can ruin the combat experience. I don't want to see enemies trying to reach my wizard, passing through my warrior tanks and suffering a lot of free damage.

Posted (edited)

This was a concern voiced earlier when engagement was first described. Ultimately, the devs were told by many players that the AI needs "to be good" but nobody really came up with a strong algorithm. As mentioned, it's tough to do.

 

Ultimately, if I were to think this through, I think there would need to be a risk/reward algorithm in place for the computer to decide whether the risk of disengaging from the current enemy is worth the reward of taking another action. How one does this might be tough to figure out.

 

Considerations in such a risk/reward question:

Damage done by engaging enemy.

Damage done by "protected enemy" whom AI is currently not engaged with.

Health remaining (even if only roughly approximated) of both the engaging and protected enemy. An almost-dead enemy would have a higher value than an uninjured enemy.

Randomization factor.

Whether protected enemy has another AI currently engaged (archers or melee).

Etc.
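A toy version of such a check might look like this -- every weight, parameter name, and threshold below is invented for illustration, not anything from PoE's actual rules:

```python
# Hypothetical risk/reward check for breaking engagement, folding in the
# considerations listed above. All weights and names are assumptions.
def should_disengage(engager_dps, target_dps, engager_hp_frac,
                     target_hp_frac, disengage_penalty, rng):
    """engager = the enemy currently engaging this AI unit;
    target = the 'protected' enemy the unit would like to reach.
    hp_frac values run from 0.0 (dead) to 1.0 (uninjured)."""
    # Reward: how attractive the protected target is; an almost-dead
    # target is worth more than an uninjured one.
    reward = target_dps * (1.5 - target_hp_frac)
    # Risk: expected free damage from the disengagement attack, worse
    # when this unit is itself close to death.
    risk = disengage_penalty * (2.0 - engager_hp_frac)
    # Staying put has a cost too: the current opponent keeps hitting us.
    risk -= engager_dps * 0.25
    # Randomization factor, so the decision isn't perfectly predictable.
    return reward + rng.uniform(-1.0, 1.0) > risk
```

The randomization term means the same situation doesn't always produce the same decision, which matters later in this thread when pattern breaks come up.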

Edited by Hormalakh

My blog is where I'm keeping a record of all of my suggestions and bug mentions.

http://hormalakh.blogspot.com/  UPDATED 9/26/2014

My DXdiag:

http://hormalakh.blogspot.com/2014/08/beta-begins-v257.html

Posted (edited)

It's not that hard I think, if you take into account that the designers also control encounter design. Just include a suitable mix of ranged and melee units in each encounter. Give them two stances: aggressive and defensive. In aggressive stance, the melee units will go after high-value targets preferably, but if engaged before they get there, stay that way until incapacitated; the ranged units target the high-value ones and attempt to stay clear of being engaged by melee units. In defensive stance, the melee units pair with the ranged ones to shield them, intercepting and engaging any player-controlled melee units that attempt to go after them.
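A minimal sketch of that two-stance scheme -- all names, values, and data structures here are my own invention, not actual PoE mechanics:

```python
# Two-stance target selection as described above: aggressive melee
# chases high-value targets but honors engagement once stuck;
# defensive melee screens its own ranged units.
from dataclasses import dataclass
from enum import Enum, auto

class Stance(Enum):
    AGGRESSIVE = auto()
    DEFENSIVE = auto()

@dataclass
class Unit:
    name: str
    is_ranged: bool
    value: float                       # how juicy a target this unit is
    engaged_with: "Unit | None" = None

def pick_goal(unit, stance, enemies, friends):
    if unit.engaged_with is not None:
        return ("fight", unit.engaged_with)  # stay until incapacitated
    if stance is Stance.AGGRESSIVE:
        # go straight after the highest-value enemy
        return ("attack", max(enemies, key=lambda e: e.value))
    # defensive: melee units pair with the most valuable friendly
    # ranged unit to shield it from intercepting attackers
    if not unit.is_ranged:
        squishies = [f for f in friends if f.is_ranged]
        if squishies:
            return ("screen", max(squishies, key=lambda f: f.value))
    return ("attack", max(enemies, key=lambda e: e.value))
```

Because the designers control encounter composition, even this mechanical rule set produces different fights just by varying the mix of Crushy-style and Shooty-style units and which stance they start in.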

 

On an open battlefield, this would turn into a situation where you have lines of durable melee units slugging away at each other while ranged units shoot at each other, concentrating fire on highest-value targets on either side. Which is, more or less, how pre-gunpowder infantry tactics actually worked. Throw in some shock units with suitable AI, and you've got a recipe for challenging and engaging combat.

 

Even applied as simply and as mechanically as this, it would make for a wide variety of interesting encounters by varying the specifics of unit composition and terrain. If you add a bit of higher-level AI so that it e.g. recognizes gaps in your line and tries to punch through, or weaknesses on your flanks and attempts to outflank, it could get quite hairy indeed.

 

Also, it's not like this is anything new; RTSes with decent AI have done more or less this for years -- and P:E, being an RPG rather than an RTS, doesn't even need to do it that well.

Edited by PrimeJunta

I have a project. It's a tabletop RPG. It's free. It's a work in progress. Find it here: www.brikoleur.com

Posted

Thing is, what are WE gonna do to reach the wizard when he is protected by fighters?

Matilda is a Natlan woman born and raised in Old Vailia. She managed to earn status as a mercenary for being a professional who gets the job done, more so when the job involves putting her excellent fighting abilities to good use.

Posted

Always shoot the wizards first.

 

[image: tumblr_lx3e89X1Pu1qjz023o1_500.png]


"It wasn't lies. It was just... bull****"."

             -Elwood Blues

 

tarna's dead; processing... complete. Disappointed by Universe. RIP Hades/Sand/etc. Here's hoping your next alt has a harp.

Posted

Thing is, what are WE gonna do to reach the wizard when he is protected by fighters?

 

Send in the rogue character and use the escape ability, use a barbarian's wild sprint ability, and/or focus multiple ranged attacks on the wizard. A ranger's marked prey ability would help with the last. I think.

"It has just been discovered that research causes cancer in rats."

Posted

So, assuming our enemies use similar strategies, party mages are in deep trouble  :p


Posted

So, assuming our enemies use similar strategies, party mages are in deep trouble  :p

 

This comes back to the pathfinding issue, though -- the reason that players can implement strategies like this is that we will pause the game and micromanage characters to avoid engagement before they reach their intended targets.  In large part, we can do this because the AI "strategy" is basically "Find the closest enemy and attack it -- if you can't attack it, move closer to it."  When the player attempts flanking maneuvers, the result (barring obstacles that prevent the player's characters from moving as they wish) is that regardless of where the enemy started combat, they end up trailing their targets, which allows the PCs to reach their intended targets.  Obviously, once they stop to engage their targets the trailing enemies will catch up, but at that point it is too late:  the PCs will make short work of the high value targets, turning the remainder of the combat into an exercise in mopping up.

 

So we need an AI that can achieve three goals:

 

1) Intercept hostile meleers well in advance of reaching their high-value targets, even when they follow indirect paths.  This means that the AI must reliably position defenders to intercept flanking attempts, and maintain this coverage when / if the units being defended reposition or split up (which might be necessary to gain line-of-sight on their ranged targets, or simply to get within range).

2) Evade any low-value defenders that the player might have in order to engage the player's high-value targets.  This means that the AI needs to identify indirect paths to their targets, as straight line paths will all but guarantee successful interceptions.

3) Intelligently allocate resources between these two roles based on the actions of the player.  For example, if the player sends all of their low value units to attack, then most of the potential defenders should remain "at home", but at least one (& likely several) should attack.  After all, the AI should understand that once the player's high value targets are engaged in melee, some of the attackers will have to disengage to rescue them.
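Goal (1) in particular reduces to a fairly tractable geometry problem: give each incoming attacker the free defender closest to its expected path. A toy sketch -- the names, positions, and greedy assignment are all invented for illustration:

```python
# Greedy interceptor assignment: for each attacker, measure each free
# defender's distance to the attacker's straight-line path toward its
# target, and send the closest one. Purely illustrative.
import math

def dist_point_segment(p, a, b):
    """Distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def assign_interceptors(attackers, defenders):
    """attackers: list of (attacker_pos, target_pos);
    defenders: list of (name, pos). Returns {attacker_index: name}."""
    free = dict(defenders)
    plan = {}
    for i, (apos, tpos) in enumerate(attackers):
        if not free:
            break  # more attackers than defenders; rest get through
        name = min(free, key=lambda n: dist_point_segment(free[n], apos, tpos))
        plan[i] = name
        del free[name]
    return plan
```

Re-running this assignment whenever the defended units reposition or split up would maintain the coverage goal (1) asks for, at least against straight-line attack runs.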

 

To be clear:  This is absolutely something that Obsidian should spend a good deal of time working on, regardless of how difficult it is.  Given the mechanical changes that Obsidian has already announced, a very good AI (one that can achieve the above goals) is an absolute necessity if the player is ever to feel threatened by their opponents.  With no long-lived buffs, no "save or die" effects, nor even any long-lived disabling effects, if the AI cannot consistently threaten low HP / low armor targets with melee attacks, then every combat will either be trivial (assuming more or less equal capabilities on both sides) or an exercise in grinding (assuming that the opponents have much greater capabilities, but use them poorly enough that the player can still achieve victory).

Posted

I've said it before, but I think the best way to do this is to establish a pattern, then introduce a sprinkling of random "breaks" in that pattern.

 

If 4 out of 5 times that foe always strikes out after the closest opponent, for example, but that 5th time, he simply doesn't, the player's going to suddenly become very aware of the fact that that tactic isn't ALWAYS going to work, and be much more prepared to react and adapt to whatever it is the foe is doing at the time, rather than just figuring out patterns and countering those same patterns every time.

 

It's basically how humans do things. Look at the old example of two intelligent generals clashing on the battlefield. They know all the good tactics, so one uses an "obviously" stupid tactic. Well, now, is that "clearly" a feint? The other general KNOWS that's a terrible tactic, but he also knows that the general USING the tactic knows this. So, why is he doing it? If it's a feint, and he assumes it is, then he'll counter whatever ambush or mystery attack is going to occur. Or, what if that general actually hopes the other will think it's a feint? If that's the case, and it isn't a feint, then it'll actually be a good tactic, only because the assumption of its being a feint has changed the circumstances of the battle.

 

We narrow things down to a list of feasible possibilities (no one's going to just strip their men of arms and armor and tell them to run in and try to punch the other army to death, for example, not even as a feint, because they'll just all die and not accomplish anything), then we choose what to do from there. But we don't always choose the same thing.

 

With humans, we're always choosing for a reason. But an arbitrary break, in the case of code, would simulate much the same thing. Plus, the human isn't going to be readily aware of the specific numbers used in the code; you won't know how often a kiting tactic will work and how often it won't. So, each and every time you have the opportunity to kite a foe, or not kite that foe, you have to guess, just because the AI follows a pattern less than 100% of the time.
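The 4-out-of-5 pattern with a random break can be sketched in a few lines -- the names and the break chance are illustrative, not from any real game:

```python
# "Pattern with random breaks": usually follow the default rule
# (attack the nearest opponent), but with some probability ignore the
# nearest one entirely and pick someone else.
import random

def choose_target(foe_pos, opponents, rng, break_chance=0.2):
    """opponents: list of (x, y) positions; rng: a random.Random."""
    by_distance = sorted(
        opponents,
        key=lambda o: abs(o[0] - foe_pos[0]) + abs(o[1] - foe_pos[1]))
    if rng.random() < break_chance and len(by_distance) > 1:
        # pattern break: "for no reason", skip the nearest opponent
        return rng.choice(by_distance[1:])
    return by_distance[0]
```

Over many decisions the foe still behaves mostly predictably, but the player can never bank on the pattern holding for any single decision, which is the whole point.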

Should we not start with some Ipelagos, or at least some Greater Ipelagos, before tackling a named Arch Ipelago? 6_u

Posted

<snip>

Like I said, this is a hard problem to solve.

 

It's hard because the goals you set are too high-level and too long-term to calculate and execute in one fell swoop. You need to break the goals down into more frequently executed smaller ones that, when chained together, have a much better chance of yielding the expected results. Breaking the problem into smaller parts is a powerful and oft-used tool in mathematics.

 

-- Getting from my position to an enemy position shouldn't be a singular action. Every few steps, there should be a re-evaluation, and if things have changed (the player moved a blocking unit into my path, I got Hobbled, reducing my movement speed, etc.), my actions should respond to that. Don't write AI based on getting to a certain enemy as a singular action; write AI for taking a few steps toward a desired goal, whether closing in on an enemy, or flanking one, or trying to avoid the melee control zone of enemy lineholders, and so on. That's manageable, the response to player tactics will come sooner, and it'll make the player's job a lot harder.

 

-- IMO emergent AI is the best way to implement high-level gameplay concepts in games. It works by communication, every entity telling all the others and its "superiors" (which are usually virtual, not tied to actual units, only exist for calculation reasons) about what it wants to do, and then a negotiation process determines which action is taken in the end.  If you want to push calculations like "evade low-value defenders" onto a self-contained unit AI, it'll be complicated and probably still easily exploited. If instead you create a higher, emergent layer above the individual units, then those units only worry about low-level goals, the high level goals are only a means of assigning relative values to actions. It's somewhat similar to actual, real-life small scale combat: you have your orders ("take that bunker atop that hill"), but those orders don't tell you how to do that. So the soldiers in the platoon talk to each other about how to best solve that. The command only gives directives, it does not provide step-by-step solutions.

 

Here's a textual mock-up, ignoring the  communication with the command layer to make it easier to follow:

 

Unit Crushy 1: I want to move forward, I see a squishy target, no obstructions. I rate this action 8/10 according to high-level goal "gank enemy squishies".

Unit Crushy 2: I want to move forward, I see a squishy target, no obstructions. I rate this action 8/10 according to high-level goal "gank enemy squishies".

Unit Shooty: I want to shoot. But I also see threats. Three enemy units are trying to reach me by flanking to the left, based on their current movement. Crushy 1, you're close to their path, I rate your interception 9/10 according to high-level goal "protect our own squishies". Crushy 2, you're somewhat close to their path, I rate your interception 7/10 according to high-level goal "protect our own squishies".

Unit Crushy 1: 9>8. Will do.

Unit Crushy 2: 7<8. Ignoring request.

Unit Shooty: If Crushy 1 will intercept, then I will shoot.

 

End of negotiation, final actions:

Unit Crushy 1: Moving to intercept enemy gankers.

Unit Crushy 2: Moving to gank enemy squishy.

Unit Shooty: Shooting.

 

Neither the Crushies nor Shooty "thinks" about which high-level goal is more important. Those values are given to them by AI in the superior/command layer. The values are not static, they are evaluated in every tick/heartbeat by a similar negotiation process in the command layer. In that layer, the entities represent the high-level goals, and they "argue" about which one is more important right now.

 

For a tactical game like Eternity, two layers should be enough. For a strategy game, even more layers could be necessary to create a good competition for a human player.
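The negotiation in the mock-up above can be reduced to a tiny skeleton: each unit proposes its own best action, other units post scored requests against high-level goals, and for each unit the highest score wins. This is only a toy illustration of the idea, not any real engine's API:

```python
# Toy negotiation step, mirroring the Crushy/Shooty dialogue above:
# every unit starts with its own proposal, then requests posted by
# other units can override it if their score is higher.
def negotiate(proposals, requests):
    """proposals: {unit: (action, score)} from each unit's own goals.
    requests: list of (unit, action, score) posted by other units.
    Returns {unit: action} after one round of negotiation."""
    best = dict(proposals)
    for unit, action, score in requests:
        if score > best[unit][1]:
            best[unit] = (action, score)   # request wins: 9 > 8
        # otherwise the request is ignored: 7 < 8
    return {unit: action for unit, (action, _score) in best.items()}
```

The command layer would run the same kind of negotiation one level up, arguing about the relative importance of "gank enemy squishies" versus "protect our own squishies" and feeding the resulting scores down to the units.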

 

----

 

The last time I wrote about good computer game AI, I gave praise to Stardock's Galactic Civilizations series (hi Brad Wardell, one of the very few people in the games industry who creates smart AI), but forgot to namedrop the other company which cares about AI: Arcen Games -- and their flagship game AI War. If you want to see amazing, multi-layered emergent AI in action, where every ship in a fleet of thousands has its own tiny AI, and which responds very well to your actions, try it. Warning: the game is addictive.

 

 

AI War is a one-of-a-kind strategy game that plays like an RTS but feels like a 4X. With tower defense and grand strategy bits, too. You'll be wanting the demo to really know what we mean, there's nothing else on the market remotely like this (as many reviewers have glowingly pointed out).

More specifically, this is a game that you can either play solo, or in 2-8 player co-op. You always play against a pair of AIs, and you can configure an enormous amount of things about the experience. The AI is very excellent, and also an entirely unique concept. The longer you play, the more the AI will impress you, which is kind of backwards from most strategy games, right? We named this "AI War" for a reason.

 

I caaaaan't waaaaait to delve into Eternity's AI!


The Seven Blunders/Roots of Violence: Wealth without work. Pleasure without conscience. Knowledge without character. Commerce without morality. Science without humanity. Worship without sacrifice. Politics without principle. (Mohandas Karamchand Gandhi)

 

Let's Play the Pools Saga (SSI Gold Box Classics)

Pillows of Enamored Warfare -- The Zen of Nodding

 

 

Posted

I've said it before, but I think the best way to do this is to establish a pattern, then introduce a sprinkling of random "breaks" in that pattern.

 

If 4 out of 5 times that foe always strikes out after the closest opponent, for example, but that 5th time, he simply doesn't, the player's going to suddenly become very aware of the fact that that tactic isn't ALWAYS going to work, and be much more prepared to react and adapt to whatever it is the foe is doing at the time, rather than just figuring out patterns and countering those same patterns every time.

 

Completely agree -- but then you still have to implement a clever AI for the 1 out of 5 combats where it is used.  In any case, the advanced AI that I'm talking about wouldn't be suitable for all opponents:  animals, for example, would likely target the nearest enemy.

 

 

<snip>

Like I said, this is a hard problem to solve.

 

It's hard because the goals you set are too high-level and too long-term to calculate and execute in one fell swoop. You need to break the goals down into more frequently executed smaller ones that, when chained together, have a much better chance of yielding the expected results. Breaking the problem into smaller parts is a powerful and oft-used tool in mathematics.

 

-- Getting from my position to an enemy position shouldn't be a singular action. Every few steps, there should be a re-evaluation, and if things have changed (the player moved a blocking unit into my path, I got Hobbled, reducing my movement speed, etc.), my actions should respond to that. Don't write AI based on getting to a certain enemy as a singular action; write AI for taking a few steps toward a desired goal, whether closing in on an enemy, or flanking one, or trying to avoid the melee control zone of enemy lineholders, and so on. That's manageable, the response to player tactics will come sooner, and it'll make the player's job a lot harder.

 

Agreed, and I even outright stated in my first post on this topic that the path needs to be re-evaluated frequently.  Note that frequent recalculation has its own set of problems -- see my example of forcing the enemy into continuous retreat. 

 

But... 

 

You still have to work out a potential path to ensure that you will reach the desired target via an indirect route for each iteration.  Now, there might still be ways to break the problem down further:  for example, setting a goal of "flank the enemy squishies at a range of 50 yards", where you could use a "shortest path to target" algorithm and achieve the same end state.  In effect, this algorithm would be establishing waypoints, which is how players work around pathfinding limitations.  But this doesn't really make the problem any simpler / more tractable (both a worthy and a necessary goal, as you pointed out -- emergent AI is almost always superior to monolithic AI), as the hard part is figuring out where the waypoints should go, and you still have to do that.

 

FYI:  I actually own, but haven't really played much, AI War -- RTS games just aren't my cup of tea any more. :(

Posted

I am pretty sure this is a non-issue in PoE, since I don't think there will be attacks of opportunity anymore (will there?)

 

Just as it wasn't an issue in Expeditions: Conquistador if you chose the better AI; it only does this if you lowered its AI to allow it to do stupid things (part of the easier difficulty settings). Try turning it (Enemy AI) to max next time, and they'll never do that unless they really *really* need to. ;)

^

 

 

I agree that that is such a stupid idiotic pathetic garbage hateful retarded scumbag evil satanic nazi like term ever created. At least top 5.

 

TSLRCM Official Forum || TSLRCM Moddb || My other KOTOR2 mods || TSLRCM (English version) on Steam || [M4-78EP on Steam

Formerly known as BattleWookiee/BattleCookiee

Posted

Eh, serves me right for not really taking too much note of the combat mechanics, and expecting somewhat similar to IE in this regard.

My bad.


Posted (edited)

You still have to work out a potential path to ensure that you will reach the desired target via an indirect way for each iteration.

I didn't want to give the impression that emergent AI is some magic wand which insta-solves all of the deeper problems. Working out the AI for the command layer(s) of a new combat system takes a lot of iteration; you can't just get it right the first time, no matter how experienced you are.

 

You're also absolutely right that lowering the frequency of position evaluation has its own set of problems. I've seen firsthand that if the tick interval is too short, the AI can get into an oscillation, where it paces back and forth between starting two things and finishing neither. I spent about 4 years creating AI brains for Kohan games, one of my favorite games of all time, and there was a variable for setting the tick interval: pure experimentation showed that the best value is about 2-3 seconds. Anything lower than that gives you worse results. I don't mean to say that this is a universal constant for every game, just that there is some optimal value.

 

But this is where having multiple layers helps you: if the system is configured properly, the stability of the high-level goals should "even out" the anomalies of the lower-level "selfish" decision-making. If level 2 still gives you "jitters" like constant retreating or running back and forth, then it's time to create level 3 with even higher-level goals. It's not an infinite loop: at some finite level, you should be able to match the human decision-making process.
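One concrete way to damp those jitters within a single layer is hysteresis: a unit switches goals only when the new goal beats the current one by a clear margin, and only after it has held the current goal for a minimum number of ticks. A sketch with invented thresholds:

```python
# Hysteresis against goal oscillation: don't flip-flop between
# near-equal goals every tick. Margin and hold time are made-up values.
class GoalKeeper:
    def __init__(self, switch_margin=2.0, min_hold_ticks=3):
        self.switch_margin = switch_margin
        self.min_hold_ticks = min_hold_ticks
        self.current = None
        self.held = 0

    def tick(self, scored_goals):
        """scored_goals: {goal_name: score} for this heartbeat.
        Returns the goal to pursue this tick."""
        best = max(scored_goals, key=scored_goals.get)
        if self.current is None:
            self.current, self.held = best, 0
        else:
            self.held += 1
            if (best != self.current
                    and self.held >= self.min_hold_ticks
                    and scored_goals[best] >= scored_goals.get(
                        self.current, float("-inf")) + self.switch_margin):
                self.current, self.held = best, 0
        return self.current
```

Small score fluctuations from tick to tick no longer flip the decision, but a decisively better goal still wins once the commitment period is over.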

 

Honestly, though, I don't want to TALK about this any more. Those years I spent with Kohan are long gone, and I'm itching to DO something like that again. Where are my scripts, Dr. Watson? My mind is ready! :p When my AIs managed to beat me on Hard difficulty, with a bit of an economic advantage (normal difficulty was hopeless for them), I was so happy! I want to have that experience again. :)

 

Also, juanval, a big thank you for opening this topic! I really enjoyed this conversation!

Edited by Endrosz


Posted

Such impressive examples from all of you :dancing:

 

I hope the devs can pull it off !!!!

 

 

I'm a little worried by what I've read of the combat mechanics so far... Do you think the default settings will be easy?


Posted

Always shoot the wizards first.

 

[image: tumblr_lx3e89X1Pu1qjz023o1_500.png]

No, healer first, then wizard. 

"You know, there's more to being an evil despot than getting cake whenever you want it"

 

"If that's what you think, you're DOING IT WRONG."

Posted

Always shoot the wizards first.

 

[image: tumblr_lx3e89X1Pu1qjz023o1_500.png]

 

I do not know if an image with Herr Starr's head in full view is entirely appropriate for this forum ;)

“Political philosophers have often pointed out that in wartime, the citizen, the male citizen at least, loses one of his most basic rights, his right to life; and this has been true ever since the French Revolution and the invention of conscription, now an almost universally accepted principle. But these same philosophers have rarely noted that the citizen in question simultaneously loses another right, one just as basic and perhaps even more vital for his conception of himself as a civilized human being: the right not to kill.”
 
-Jonathan Littell <<Les Bienveillantes>>

"The chancellor, the late chancellor, was only partly correct. He was obsolete. But so is the State, the entity he worshipped. Any state, entity, or ideology becomes obsolete when it stockpiles the wrong weapons: when it captures territories, but not minds; when it enslaves millions, but convinces nobody. When it is naked, yet puts on armor and calls it faith, while in the Eyes of God it has no faith at all. Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of Man...that state is obsolete."

-Rod Serling

 

Posted

I just started IWD 2. Haven't played it before as a matter of fact. Dat pathfinding...

 

If there's just one change to the AI that I'd kill for, it's this:

 

Don't walk into a harmful area effect unless explicitly ordered to do so.

 

It's really bloody annoying to keep the little dudes from wandering straight into a Web or similar when they get over-excited after ganking somebody. Idiots.

 

Other than that, it's good clean IE engine fun. I hope I run out of goblins soon, though, it's getting a bit repetitive. Still early going, just hit level 6, but I am getting kind of tired of fighting orcs and goblins in corridors.


Posted

Completely agree -- but then you still have to implement a clever AI for the 1 out of 5 combats where it is used.  In any case, the advanced AI that I'm talking about wouldn't be suitable for all opponents:  animals, for example, would likely target the nearest enemy.

Oh, yeah! Agreed. I just saw a lot of people (in this discussion and prior ones about AI) kind of trying to fix the problem of pattern exploitation by just further complicating the pattern. But I think if you never have something that breaks the pattern "for no reason," it'll never really feel like it's making a decision. You have to simulate that second-guessing, or ulterior motives behind choices, and/or just plain "this isn't the best calculated tactic, but I'm doing it 'cause I feel like it, and you can't calculate that" choices that humans make.

 

Even with animals, they'd be a lot simpler in their template, but you could still have some seemingly "random" fixations, etc., as an abstraction/general simulation of various states of the animal. Maybe one wolf is starving, and another is not. The game doesn't have to display "Ravenous Wolf" when you mouse over the target. Thus, when that wolf "randomly" fixates on a specific target when we expect it to simply go for the nearest threat, we don't really have to be told why it did that.

 

Granted, yeah, even the random pattern breaks have to be well-thought-out. You can't just play action-roulette with the action tables in the AI code and randomly have them cast heal spells on themselves when they're at full health and such. But, it's definitely a factor that a lot of AI systems don't really take advantage of.

 

Even the most complex one eventually becomes much simpler if you can just figure out how it "thinks" and cut it off every time, with enough effort.

