Polygon Shape Filling

This entry describes a little spinoff (spun-in?) project that happened during game development.

For all kinds of reasons, I needed a utility bitmap class. Just a place to load and save bitmaps and move them around. For example, on their way to becoming texture maps or fonts.

Easy enough, for my purposes 8 bits each of RGBA is fine.

class OmImageRgba8 : OmObjectBase
{
public:
    int width = 0;
    int height = 0;
    uint32_t *pixels = 0; // malloc'd, disposed with instance
    OmImageRgba8(int width, int height);
   ~OmImageRgba8();
    void setPixel(int x, int y, uint32_t pixel);
    uint32_t getPixel(int x, int y);

    bool writeFile(std::string filePath);
    static OmImageRgba8 *readFile(std::string filePath);
};
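
In use it looks about like this. (A hypothetical snippet; the checkerboard pattern and file name are made up, but the constructor and methods are as declared above.)

#include <cstdint>

// Hypothetical usage sketch: fill a checkerboard and save it as a PNG.
void makeTestCard()
{
    OmImageRgba8 image(256, 256);
    for (int y = 0; y < image.height; y++)
        for (int x = 0; x < image.width; x++)
            image.setPixel(x, y, (((x / 32) ^ (y / 32)) & 1) ? 0xffffffffu : 0xff000000u);
    image.writeFile("testCard.png");
}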

So, about readFile() and writeFile(). I do love writing everything myself… up to a point. I’m often put off by the complexity of using Other People’s Code when it becomes mired in a tangle of still more Library Dependencies. Makes it hard to build my project.

Happily, I found “plain old code” libraries for PNG and JPG. By “plain old code” I mean, it’s a small number of source code files that just work on a normal compiler.

LODEPNG by Lode Vandevenne for reading and writing PNG files.

JPEG-COMPRESSOR by Rich Geldreich, for reading and writing JPG files. 

These were both very easy to integrate. The ::readFile() and ::writeFile() methods on OmImageRgba8 simply look at the file extension to choose which library to use, and only work if it’s .jpg or .png.
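
The dispatch looks more or less like this sketch. The readPngFile and readJpegFile helpers are hypothetical stand-ins for the thin wrappers around LODEPNG and JPEG-COMPRESSOR; the real code has the matching write side, too.

#include <cctype>
#include <string>

// Hypothetical wrappers around the two libraries (not shown here).
OmImageRgba8 *readPngFile(const std::string &filePath);
OmImageRgba8 *readJpegFile(const std::string &filePath);

// Case-insensitive extension check; 'suffix' is passed in lowercase.
static bool endsWithNoCase(const std::string &s, const std::string &suffix)
{
    if (s.size() < suffix.size())
        return false;
    for (size_t i = 0; i < suffix.size(); i++)
        if (tolower((unsigned char)s[s.size() - suffix.size() + i]) != suffix[i])
            return false;
    return true;
}

// Sketch of the extension dispatch: only .png and .jpg are handled.
OmImageRgba8 *OmImageRgba8::readFile(std::string filePath)
{
    if (endsWithNoCase(filePath, ".png"))
        return readPngFile(filePath);
    if (endsWithNoCase(filePath, ".jpg"))
        return readJpegFile(filePath);
    return nullptr; // unsupported extension
}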

More features crept in over time. Some images arrive Y-up, others Y-down, so ::flipY() was added.

For debugging, it’s handy to imprint text information onto a bitmap, so ::drawF(uint32_t color, const char *fmt, …) was added. It uses a simple 8×8 pixel font. I came across this handy font some years ago, and must share its origin. The link is http://overcode.yak.net/12. It was a small image which I decomposed into static C data.

char font8x8[] = 
{
…
0x08,0x49,0x2a,0x1c,0x2a,0x49,0x08,0x00,   // 0x2a '*'
//   . . . . @ . . . 
//   . @ . . @ . . @ 
//   . . @ . @ . @ . 
//   . . . @ @ @ . . 
//   . . @ . @ . @ . 
//   . @ . . @ . . @ 
//   . . . . @ . . . 
//   . . . . . . . . 
0x08,0x08,0x08,0x7f,0x08,0x08,0x08,0x00,   // 0x2b '+'
//   . . . . @ . . . 
//   . . . . @ . . . 
//   . . . . @ . . . 
//   . @ @ @ @ @ @ @ 
//   . . . . @ . . . 
//   . . . . @ . . . 
//   . . . . @ . . . 
//   . . . . . . . . 
…
};

The font was designed by John Hall, and on the website above, he also documents some other code and tech work, and some aeronautical items, and his descent and demise due to skin cancer. So I always think a few kind words of thanks to this unknown and lost fellow coder and this one part of his legacy that I use. Thanks John.
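
For the curious, stamping a glyph goes something like this. It's a sketch, not the actual drawF (which also does the printf-style formatting); it assumes the table is indexed by ASCII code with one byte per row and the most significant bit on the left, which is what the '*' and '+' rows above suggest.

// Sketch: stamp one 8x8 glyph at (x0, y0). Assumes font8x8[c * 8] is the
// first row of glyph c, one byte per row, MSB = leftmost pixel.
void drawGlyph(OmImageRgba8 *image, int x0, int y0, char c, uint32_t color)
{
    const char *glyph = font8x8 + (unsigned char)c * 8;
    for (int row = 0; row < 8; row++)
        for (int col = 0; col < 8; col++)
            if (glyph[row] & (0x80 >> col))
                image->setPixel(x0 + col, y0 + row, color);
}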

And somewhere along the way I wanted to do some generative art, so I added a basic antialiased rectangle-fill method. It handles the edge and corner pixels specially for partial coverage, and fills the broad interior. Fun enough.
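
The idea, roughly: a pixel's coverage separates into an x-overlap times a y-overlap, so only the edge rows, edge columns, and four corners get fractional values. A minimal sketch of that computation (the blend into the existing pixel is elided, and the names are mine):

#include <algorithm>

// Fraction (0..1) of pixel (px, py) covered by the axis-aligned rectangle
// [x0, x1] x [y0, y1]. Interior pixels get 1.0; edge and corner pixels
// get partial coverage.
float rectPixelCoverage(int px, int py, float x0, float y0, float x1, float y1)
{
    float xOverlap = std::min(x1, px + 1.0f) - std::max(x0, (float)px);
    float yOverlap = std::min(y1, py + 1.0f) - std::max(y0, (float)py);
    if (xOverlap <= 0.0f || yOverlap <= 0.0f)
        return 0.0f;
    return xOverlap * yOverlap;
}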

(generative art)

But that slippery slope led up to September 2022, when I thought, All the cool kids have implemented a scanline polygon fill, it’s time for me.

Filling polygons is kind of a big bother, keeping track of edge lists and numbers and stuff. Oh well! Computers and programmers love that kind of thing. Here’s the basic approach.

I’ll define a polygon as one or more closed loops of straight edges. The polygon is defined by its vertices, and each vertex is shared by two edges.

For a simple polygon fill, we fill each pixel if and only if the center of the pixel is within the polygon. 

(A polygon with two paths)

(We’ll discuss partial coverage later, I promise.)

Essentially, we want to ask each and every pixel, “Is the center of this pixel within the polygon?”

(Scanlines and centerpoints)

For each row (or “scanline”) we determine which edges cross the something-point-5 line of that row. There will always be an even number of them. Then we find the x-position of each crossing, sort them, and fill in pairs: only those pixels whose centers fall within an x-pair’s span.

Some simple optimizations include:

• presorting all the edges by lower-y value, so you just look at the next one to see if it’s in Y range

• using an x-step value for each active edge, as we step down each scanline, because we do render the scanlines sequentially

• discard horizontal edges, or any edge that doesn’t traverse a Y-point-five boundary

At first it did seem like a bother, but it all became easy to implement.
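
Here’s a stripped-down sketch of one scanline, without the presorted edge list or the incremental x-stepping. The names and containers are mine, not the actual OmPolygon internals.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Fill row y of a polygon given as one or more closed loops of vertices.
void fillScanline(OmImageRgba8 *image, const std::vector<std::vector<Vec2>> &loops,
                  int y, uint32_t color)
{
    float yCenter = y + 0.5f; // the row of pixel centers
    std::vector<float> xCrossings;

    for (const auto &loop : loops)
    {
        for (size_t i = 0; i < loop.size(); i++)
        {
            Vec2 a = loop[i];
            Vec2 b = loop[(i + 1) % loop.size()];
            // Keep only edges that traverse the y-point-five line; this
            // half-open test also discards horizontal edges.
            if ((a.y <= yCenter) == (b.y <= yCenter))
                continue;
            float t = (yCenter - a.y) / (b.y - a.y);
            xCrossings.push_back(a.x + t * (b.x - a.x));
        }
    }

    std::sort(xCrossings.begin(), xCrossings.end());
    // Always an even count; fill pixels whose centers fall inside each pair.
    for (size_t i = 0; i + 1 < xCrossings.size(); i += 2)
    {
        int xStart = (int)ceilf(xCrossings[i] - 0.5f);
        int xEnd = (int)ceilf(xCrossings[i + 1] - 0.5f); // exclusive
        for (int x = xStart; x < xEnd; x++)
            image->setPixel(x, y, color);
    }
}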

What about antialiasing? The output looks pretty blocky without it. You can always render bigger and scale down, a perfectly respectable solution.

One easy thing I was tempted to try: incorporate the x-position of each edge for partial coverage.

[A simplistic and flawed anti-aliasing approach]

Alas, this would only help the side edges, and not the top edges, and would just look funny.

But… look closely at these illustrations. Go ahead, zoom in. They were all drawn using OmImageRgba8 and OmPolygon filling. And they’re antialiased very nicely! Next post will demonstrate a nice antialiasing technique that builds on this edge-sorting, and doesn’t involve downscaling.

Collisions, Yet Again, Part 3.

This post is Part 3 of 3 about Metareal’s collision subsystem.

In Part 1 and Part 2, we described how boxes can fly through spaces, hit each other, push each other, and stop when they hit walls.

In this final Part we’ll look at a few glossed-over details… And then the rather glamorous feature which allows several boxes to be treated together as a composite volume.

The Fabric Of Space

I just read on Wikipedia that empty space in our universe contains about a trillionth of an erg per cubic centimeter. That works out to about a millionth of an erg per cubic meter. “Good heavens!”

But that’s not important. In the Metareal world, we need to know where things are. We consider each “thing” as a box shape (or a collection of box shapes). This is described as a position (X,Y,Z), and a radius in each axis (Rx, Ry, Rz).

To detect collisions, both for physical movement and also for game triggers, we need to ask: “Which boxes, if any, are we touching?” One way would be to check every single box in the world.

CollisionPart3NoBins

Even for Metareal’s modest universe of about ten thousand anticipated objects, this would be quite slow. Time is framerate and all that.

There are a number of well-known techniques for optimizing this (k-d trees and octrees are two of them) but I chose a simple one: fixed-size bins. Since objects in Metareal are all in a bounded area about 1200 units across, and somewhat evenly distributed, this works well.

CollisionPart3Bins

Each bin is 16 units across. Why 16? Trial and error, and most objects are less than 16 units big. To check for a collision, we just look in the bin where our collider is (its center point), as well as the 26 bins adjacent to it. Ah, you say, But what about objects that are bigger than 16 units? Well… I put them in all the bins that they’d hit. It’s a little bit inelegant for large objects, but seems to work.

When an object moves, we see if its center has changed bins. If so, we remove it from all the bins it was in (usually just 1, but more for a big object) and add it to all the bins it will be in. Again, somewhat dorky for large objects, but it works for now.

Many bins are empty. To locate a bin, we index into a std::map<long, Bin> with a 48-bit key derived from 16 bits each of the X, Y, and Z bin-position. (With a 1200-unit world and 16-unit bins, it could fit into 8 bits per axis and an int map index… but it’s fine.) The way std::map works, a bin pops into existence the first time it is accessed.
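
Something like this, as a sketch; the exact bias and bit layout here are my guesses, since the post only fixes 16 bits per axis and the create-on-first-access behavior of std::map.

#include <cmath>
#include <cstdint>
#include <map>

struct Bin { /* the collision volumes whose centers fall in this cell */ };

static const float kBinSize = 16.0f;
static std::map<int64_t, Bin> bins;

// Pack the three bin coordinates into one key, 16 bits each (48 bits used).
// The 0x8000 bias keeps each quantized coordinate non-negative.
int64_t binKey(float x, float y, float z)
{
    int64_t bx = (int64_t)floorf(x / kBinSize) + 0x8000;
    int64_t by = (int64_t)floorf(y / kBinSize) + 0x8000;
    int64_t bz = (int64_t)floorf(z / kBinSize) + 0x8000;
    return (bx << 32) | (by << 16) | bz;
}

Bin &binAt(float x, float y, float z)
{
    return bins[binKey(x, y, z)]; // pops into existence on first access
}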

Ball Vs Wall

In Part 2, we described how collisions know when to stop: When the thing moving or something it’s pushing hits a wall. This feature is actually very slightly deeper. Every object has an Inertia value, which represents how hard it is to push. And every movement has a Force value, which is how big an object it can push. If the Force is bigger than the Inertia, then the thing can be pushed. Else, it is stopped. In a chain of several objects moving, the original Force is transferred through all the pushed things.

CollisionPart3Chain

A wall is a thing with a relatively high Inertia. I’ve often had bugs where a Metareal room accidentally had a low Inertia, and would move around. It is disorienting and then the hallways no longer line up. 🙂

Complex Collision Volumes

I’ve found you can get quite far with a world whose only colliders are box-shaped. Especially in this game, where the physics are inherently… boxy! Still, it’s valuable to have other shapes, for certain kinds of puzzles where you push or manipulate pieces with corners or fitting areas and such.

The implementation builds on the technique described in Part 2, and is almost disappointingly simple. To move a compound collider, move each of its component boxes as if it were a single box; the resulting possible movement is whichever of them can move the least. When it pushes other objects, they each check how far their own components can move.
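
A sketch of that rule. howFarCanMoveSingleBox() is a hypothetical stand-in for the Part 2 single-box test; distances here are magnitudes along one axis, with the sign handling elided.

#include <algorithm>
#include <vector>

struct Box { float x, y, z, rx, ry, rz; }; // center and per-axis radii, as above

// Hypothetical: the Part 2 test for one box moving alone along one axis.
float howFarCanMoveSingleBox(const Box &box, int axis, float wantedDistance);

// The compound collider may only move as far as its most constrained component.
float howFarCanMoveCompound(const std::vector<Box> &components, int axis, float wantedDistance)
{
    float allowed = wantedDistance;
    for (const Box &box : components)
        allowed = std::min(allowed, howFarCanMoveSingleBox(box, axis, wantedDistance));
    return allowed;
}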

CollisionPart3Compound

Illustrated above is a simple volume pushing a compound volume.

Shown below is the movement of several linked compound objects. It alternates between the game-render view, and a debug view showing the component volumes.

CollisionPart3linksLoop

Now, because I sometimes let my code evolve a bit too long before refactoring, the actual data structure is that each CollisionVolume object includes a std::vector of all its fused CollisionVolumes. And they all point to each other. Which is maybe an odd way to do it and slightly wasteful of space, but it works and that is that for now!

Collisions, Yet Again, Part 2.

This post is Part 2 of 3 about Metareal’s collision subsystem.

In Part 1, we described the basic, single-axis model of movement, pushing, and blocking. Here we’ll look at more complex movements and collisions, and some special cases.


Diagonal Moves

Objects in the Metareal World can move along diagonals. For collision testing, we do this by moving first in X, then Y, and then Z. This can lead to some ambiguities.

Collisions11

But in practice, most movement is done in smaller steps. This could affect “bullets”, but that doesn’t come up much in Metareal. More importantly, objects can’t escape enclosures such as rooms, regardless of the axis ordering.

Collisions12

Transporting Into A Bulkhead

As was mentioned in Star Trek, one of the failure modes for teleportation is materializing inside a bulkhead. Sometimes objects move instantaneously in Metareal. Some doors need to just slam shut, where by “slam” I mean “not slam but rather appear instantly.” But it might do so on top of another object, maybe even you! In this case, we allow the object to move out of intersection, and once freed it cannot move back into intersection.

This behavior arises naturally from the process of checking leading edge movements. The leading edge of an object already intersecting an obstacle won’t freshly cross the obstacle face.

Collisions13

The Roundoff Problem

Problem: Due to arithmetic errors, an object that is just barely touching an obstacle sometimes slides right through it.

Consider the following move:

Collisions14

Object moves to the right, stops at the wall. So far, so good. But all these coordinates are floating point. Adding the difference between wall and object to the object’s center, and then recomputing the extents, introduces a tiny arithmetic error: it’s actually penetrating the wall jussssssst a leedle.

Collisions15

Next time it moves, it can proceed further to the right, satisfying the “ok to move out of collision” rule. But we want it to hit the wall and stop.

Solution: Add a “fudge factor” to the collision distance. The entire Metareal World is about 1200 units across; I’ve found that adding 0.001 to the collision distance brings all roundoff errors back into the positive, not-already-intersecting, realm.


And Part 3 will conclude this series, with a discussion of compound colliders. It’s all made of boxes.

Collisions, Yet Again, Part 1.

This post is Part 1 of 3 about Metareal’s collision subsystem.

Building my own engine is deeply satisfying. But it is iterative, to be sure. This will be the third major overhaul of the collision system. Might be the last, we’ll see. Where were we…

Move-Then-Fix: A Failed Approach

Back here is a description of a first pass at collision management. Basically, it does these steps:

  • Move the thing where the thing thinks it’s going
  • Look at any intersections
  • Try to do some more movement to get out of intersections
  • Some secret sauce to move the least, but also not get snagged on seams, and stuff like that

It worked just enough to fool me. But it had problems. Here’s one. When moving into a narrow space, but more “into” than “across”:

Collisions6

The minimal intersection-fix is also illegal, and would lead to a metastable vibration:

Collisions7

In some cases it’d just get stuck in walls. Maybe with further analysis it could undo more of the move, but then it’s no longer so simple as “Move and then fix.” It also didn’t support “pushing” in any obvious way. So I came up with a whole new approach.

Recursive Intersection Testing

When an object A moves distance d in the X-axis, one of three things will happen:

  1. It will move the entire distance without bumping into anything
  2. It will bump into something it cannot push, and stop
  3. It will bump into object B that it can push, and might continue pushing it

In case 3, object B must be tested for however far A might push it. This is implemented recursively. And it is performed independently for each axis of motion.

Here’s an example. A moves 4 units to the right, so it’s going to bump into B and C. Also, B will hit a wall.

Collisions8

The analysis descends like so:

  • A wants to move 4 units right. How far can you move, A?
  • A sees intersections first with C (1 unit away) and then B (2 units away)
  • Recurse. C wants to move 3 units right. How far can you move, C?
  • C has no intersections within 3 units. I can move all 3 units!
  • Recurse. B wants to move 2 units right. How far can you move, B?
  • B hits a wall in 1 unit. I can move 1 unit right.
  • We know that A can move 3 units to the right (distance to B is 2, plus how far B can move).

After this analysis, we simply move A 3 units right, and each of the previously analyzed intersections moves by what’s left.

And this is done for delta X, delta Y, and delta Z. Doing these axes separately completely eliminates any possibility of entering a shaft too narrow for the object.
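
In code, the recursion looks roughly like this sketch. The Collider and Blocker structures and findBlockersWithin() are mine; distances are magnitudes along the chosen axis, and the actual box-overlap tests and sign handling are elided.

#include <algorithm>
#include <vector>

struct Collider
{
    float inertia; // how hard this object is to push
    // ... position, extents, etc.
};

struct Blocker
{
    Collider *object;
    float gap; // distance before our leading edge touches it
};

// Hypothetical: every object whose face we'd cross within 'distance'.
std::vector<Blocker> findBlockersWithin(Collider *mover, int axis, float distance);

// "How far can you move?" for one axis.
float howFarCanMove(Collider *mover, int axis, float wanted, float force)
{
    float allowed = wanted;
    for (const Blocker &b : findBlockersWithin(mover, axis, wanted))
    {
        if (force <= b.object->inertia)
        {
            // Can't push it: we may only travel up to its face.
            allowed = std::min(allowed, b.gap);
        }
        else
        {
            // Can push it: recurse to see how far *it* can go, and add our gap.
            float pushed = howFarCanMove(b.object, axis, wanted - b.gap, force);
            allowed = std::min(allowed, b.gap + pushed);
        }
    }
    return allowed;
}

Running the example above through this: C reports 3, so C doesn’t constrain A; B reports 1, so A is allowed 2 + 1 = 3.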

The Bullet Problem

Problem: If an object is moving quickly, its proposed position could be on the other side of a wall, with no intersection.

Solution: Move only the leading edge. That is, check for intersections of the swept volume.

Collisions10

The actual test is simply: does the leading edge start out before the closest edge of the obstacle, and end up past it? In the implementation there’s plenty of cruft relating to positive- or negative-motion, to check which edges are moved, which are compared against, and so on.
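
Stripped of that cruft, the one-dimensional test for positive motion is just this (a sketch; the matching overlap checks on the other two axes are elided):

// Does the mover's leading edge cross the obstacle's near face while
// travelling 'distance' in the positive direction? True even if the final
// position is entirely past the obstacle (the bullet case); false if we
// already started past the face (the materialized-inside-a-bulkhead case).
bool leadingEdgeHits(float leadingEdge, float distance, float obstacleNearFace)
{
    bool startsBefore = leadingEdge <= obstacleNearFace;
    bool endsAfter = leadingEdge + distance > obstacleNearFace;
    return startsBefore && endsAfter;
}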


That’s enough for this post. Part 2 will discuss diagonal movement and roundoff errors. Part 3 will describe collision objects more complex than “A Box”.

IndieMegabooth GDC Showcase Postmortem

After a bit over a year of heads-down coding, I finally had the first public showing of Metareal. It was over the week of March 14th through 18th, 2016, at the Game Developer Conference in San Francisco, at the Indie Megabooth Showcase. Here’s some notes for my own future reference, and the otherwise curious.

This post has become a bit longer than I prefer… but I’m going to leave it that way. Sorry.

The Selection

I’ve been working on my game for a while, and had decided that, one way or another, in 2016 would start to show it around. Submitted it for consideration to Indie Megabooth for Pax East.

Made a custom build and everything with some friendly personalized in-game notes to the IMB reviewers.

ImbGdcPostmortem1
(Looking back at this build, I cannot believe how cheesy it is!)

Figured it would be fun to hit Boston, April was a ways off, and in the “Minibooth” would just have it up part time, a few hours here and there for play testing and light publicity.

Waited for what felt like a terribly long time (but actually wasn’t), finally in January got the Best Email Ever from IMB:

We’d like your game to be one of fifteen at the IMB showcase at GDC, no charge, and oh you get an all access pass too.

Could not believe it!

The criteria for this selection, as I came to understand them, were games of interest that weren’t really quite ready for a consumer-facing event, and for which a developer-focused event was more suitable. And I think “games of interest” was a curatorial choice on their part, for a variety of genres, developer diversity, and so forth.

Panic in Santa Cruz

On the plus side, this was a lot more logistically convenient for me. San Francisco is a 90 minute drive. I have friends who live near there. Boston is a bit further.

On the panic side, GDC was over a month earlier than Pax. Some various other projects were instantly backburnered. My Atari Tempest Machine restoration. My dining room table refinishing. My invitation to show some video art at a one-night event at our local art museum: CANCELED TIL AFTER GDC!

A little panic is good though.

Getting Stuff Done

I’m pretty good at “doing things”. Yet, for the first time in my life, I felt a profound bit of impostor syndrome. I’d written to IMB with a pack of lies and aspirations, and attached a barely working bit of buggy code, and they were calling me on my bluff. Ah, my failure, it would be hilarious and spectacular, no?

But back to reality.

The demo submitted to GDC didn’t even have sound, or a start screen. But now, a concrete goal was clear. Fortunately, I’d spent the month before The Letter arrived building up my sound engine and patch editor. Time to spring into action.

ImbGdcPostmortem3
Tool for editing audio patches.

All was as it should be. Coding, designing, and jumping up from time to time saying, “Yikes, I am going to GDC.”

We had a nice little email list for the IMB GDC 2016 cohort, and later set up a Slack team chat. This was great. Lots of advice about special demo features, data logging, &c. A couple of us were on chat quite regularly. Had especially useful communication with Austin Schaeffer (presenting Altered State), Radu Muresan (presenting Semispheres), and Ryan Burrell (Indie Megabooth all-around awesome guy). Radu had presented at a number of events, and had great guidance.

Two Months of Intense Focus Later…

I had a pretty presentable demo! It had some sound effects! It had the special Beginning of the Game where you find out that xxxx xx xxx xxxxxxx xxxx xx xxxxx. There were 12 rooms with 6 individual and challenging puzzle-elements that had to be completed! It was brightly colored!

ImbGdcPostmortem2
Looks a bit tighter than the original submission. Sound effects, too.

I got cocky and started my Steam Greenlight just before GDC. That may have been a miscalculation; the “Gameplay Video” was maybe not as compelling as it needed to be just yet. It’s currently stalled out around 270 yes votes. On the other hand, at least 100 of those were directly from GDC exposure. (Not worried; I’ll do PR and Follower Blasts at appropriate future milestones.)

My First Day At The Office

So, mathematically speaking, I’m older than the average game developer. I’ve been writing code for 40 years, first professional sale 35 years ago. But this is my third ever game development effort, and the other two were decades ago. And GDC… is the first time I’ve been around other professional game developers.

Truly, 2016 March 13th, Sunday, was my first day at the office! I finally got to meet “my people”.

Arrived at Moscone, found the MegaBooth, happily met our IMB hosts (all wonderful) and Austin and Radu for the first time, and installed my game on the Demo Laptop. Aaaaaand it totally crashed. Head smack, and so forth. I’d made the standard rookie error of building against dynamically linked debug libraries. Shortest fix: I installed Visual Studio, completely, on the Demo Laptop. Resolved.

Five Days of Demos

ImbGdcPostmortem4

People played it!

  • The first person to approach the game played for 8 seconds, and walked away. Oh no!
  • The second person played for 10 minutes and got well into it, and we got to chat. Whew!
  • Not all games are for all people. (A few lucky ones are. Not mine.)
  • People have different play styles, ranging from extremely rapid and random gestures up to finely controlled experiments. On the whole, those latter players got more out of Metareal.
  • IMB got us invited to a lovely catered dinner hosted by Valve, about the future of indie games and why Steam is a great platform.
  • Talking to people became pretty fun after not too long. Like most trade shows, people are eager to talk about what they’re working on, find out about yours. Unlike most trade shows, people are honestly proud and excited in a playful way about their work. Also, authentically giving, and receptive to, ideas and subtleties and analyses.
  • I should have accepted IMB’s offer of two event passes and recruited a helper. Every other postmortem advises this, which I ignored.
  • Just the same, I went to a few talks, and checked out some other games. Finally got to meet William Chyr and actually play Manifold Garden (awesome!). Robin Baumgarten’s LED-based installation game Line Wobbler was fun & notable. And I had an affinity for Le Chant du cygne’s goggle-assisted abstract puzzler Palimpsest, of course.
  • Met many other amazing developers, too many to list… But two I simply must mention for their sheer friendliness and cognitive energy are Ed Fries, notable for a recent Atari 2600 game (no, seriously), and Droqen, the author of Starseed Pilgrim.

Costs

My costs were minimal. IMB charged nothing, and gave me a $2000 all-access pass. Thank you, Indie MegaBooth!

I drove to Berkeley, parked my car at a friend’s house, and stayed at another friend’s house for the week. Each morning I took BART to Moscone. I brought a couple of sandwiches and bananas each day. The only food I got on site was coffee.

Glitches and Learnings

  • As mentioned above, should have brought a helper to man the station.
  • Discovered I need a Virgin Operating System for proper build testing. One without dev tools!
  • On IMB’s part, there was some ambiguity about the specs of the demo machines we’d have available. Was worried about graphics performance (especially since I have a home-brew engine) but it all worked out fine.
  • My impostor syndrome subsided completely.

In Conclusion

This was the best week so far of my new, late-stage career as a game developer. Thank you Indie MegaBooth! This was an amazing opportunity and experience.

Physics in the Metareal World

Having a rock-solid world is critical to making Metareal seem, well, real. It doesn’t have to be like our world, but it has to be solid and consistent. You need to be able to experience, experiment, and extrapolate the behavior of the world. At least for this game.

Earlier blog posts here and here discuss Metareal physics a little bit, in the context of collision management.

Here, I will describe some of the rules of Physics in the Metareal World.

The Rules

• Objects Stay In Place. There is no gravity in the Metareal World. An object resting in space simply stays there.

• Objects Are Composed Of Boxes. Regardless of how they look, in terms of physics all objects are composed of 1 or more boxes.

Physics1

• Objects Can Self-Propel. Some objects can move, simply because they choose to move.

• Objects Can Push. When an object moves into another object, it will either succeed in pushing it, or stop.

• Force and Inertia. An object at rest has some intrinsic inertia. A moving object must be moving with greater force to push that object. This is implemented with a number. A force of 5 can push an inertia of 4 or lower. The rooms are made with walls of very high inertia.

• Objects Have No Momentum. When an object stops propelling itself, or is no longer pushed by another object, it stops.

• Objects Are Frictionless. When an object pushes or blocks another, force is only applied perpendicularly to the abutting faces. Thus, you can only push a cube along one axis at a time.

Physics2

• Size Matters Not. Each object has a certain inertia or pushing-force simply because I say it does.

Flatland and Spaceland

This is not at all how objects in the real world behave. And yet, it is very easy to understand and reason about. Why should this be? The human mind is sometimes effortlessly flexible.

But perhaps one familiar reference point helps. If we place objects on a tabletop, and push them around in two dimensions, this is exactly how they behave. It seems likely that objects in the Metareal World are resting on a four dimensional tabletop, and it is gravity and friction perpendicular to this world which gives rise to its entirely predictable behavior.

Math

Metareal has some objects that just stay in one place and rotate. They look nice.

Came across a funny little bug this morning. I’d let Metareal run all night, and the rotating objects were all moving a bit chunkily, not turning smoothly like they should.

Frame rate: AOK, 60 FPS still rock solid.
Player movement: AOK, nice and smooth.
Spinning objects: All of them kikk’ty kikk’ty kich. Broke!

Took a look at the rotating object code. Each game object has a “body” and a “mind”, and spinning is implemented in a mind…
The “rotate” parameter is a vec4 as {axisX, axisY, axisZ, degreesPerSecond}.

bool SpinMind::mindTick() // override
{
  MOVEC4(rotate); // retrieve instance parameter "rotate" to local variable.
  this->tix++;
  float degrees = rotate[3] * tix / 60.0;
  rotate.v[3] = degrees;
  this->setBodyRotation(rotate, degrees);
        
  return true;
}

Can you spot the bug?

It’s roundoff error. As “tix” (ticks, ha ha) gets large, degrees gets large. A 32-bit float carries only about seven significant digits, so once degrees has grown huge after a night of running, the small per-frame increment loses precision and the rotation turns chunky.

The solution? Accumulate degrees (not ticks), and wrap at ±360.

bool SpinMind::mindTick() // override
{
  MOVEC4(rotate);
  this->degrees = fmod(this->degrees + rotate[3] / 60.0, 360.0);
  rotate.v[3] = this->degrees;
  this->setBodyRotation(rotate, degrees);
        
  return true;
}

This is all basic and obvious! But apparently I got something working just enough, and moved onward. That’s the way to make progress.

Windows Build!

2015-10-05

Last week I got a used, hand-me-down Dell Optiplex 780 from a very generous friend. It’s a 2009 3 GHz dual core. Upgraded it with an NVidia GT730 DDR5 graphics card ($73, Amazon), and Windows 8.1. Works great! (I can run NaissancE on it, also, finally, a PC for that game.)

Porting Metareal to it took a couple of days. Fortunately SDL2 and OpenGL are the foundations (as planned), so it was mostly straightforward.

Some notes, about porting an SDL2 and OpenGL app from Mac OS X to Windows.

  • Be sure to add opengl32.lib to your project. Lots of web guides I read neglected this simple step!
  • The base SDK only supports OpenGL 1.1. No, seriously. On Mac you just include gl3.h and get it all; Visual Studio’s C++ SDK just has OpenGL 1.1. Turns out on Windows you need to dynamically load all the extensions and later versions…
  • To get to higher versions of OpenGL, I used glew. If you do, don’t get caught up in their static vs dynamic libraries and 32 vs 64 bit… just add glew.c to your project. And use their OpenGL replacement header files.
  • Am I done with glew yet? No. There’s some sort of issue with glGenFrameBuffers and other framebuffer calls. Do glewExperimental = true; glewInit(); and, yeah, then it works.
  • SDL is a little finicky and fails silently. If you do something a little wrong, it will appear sometimes to work, but char *s = (char *)glGetString(GL_VERSION); shows version 1.1. In my case, on Mac I did SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 32); which, on Windows, silently throws you back to the OpenGL 1.1 software renderer. Do SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24); instead. (A combined init sketch follows this list.)
  • Another SDL2 thing… A full screen SDL2 window will steal the whole monitor’s focus even before it’s visible. Very confusing if you’re debug-stepping (and potentially forces you to reboot, if you have just one screen). Start with a non-full screen window.
  • But everything ran so slowly. Shockingly slowly, absurdly slowly. Add this preprocessor directive at the project level: _HAS_ITERATOR_DEBUGGING=0.
  • Also set the Debug Information Format to Program Database (/Zi), instead of the more elaborate Edit-and-continue enabled one.
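
Pulling those last few notes together, the startup ends up looking roughly like this. (A sketch: the window size and GL version numbers are placeholders, and error checking is mostly omitted.)

#include <cstdio>
#include <GL/glew.h> // include glew before other GL headers
#include <SDL.h>

SDL_Window *createGlWindow()
{
    SDL_Init(SDL_INIT_VIDEO);

    // 24-bit depth: asking for 32 silently drops you to the 1.1 software renderer.
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3); // placeholder version
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    // Start windowed, not fullscreen, so debug-stepping doesn't lose the screen.
    SDL_Window *window = SDL_CreateWindow("Metareal",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1280, 720, SDL_WINDOW_OPENGL);
    SDL_GL_CreateContext(window);

    // glew: the experimental flag is needed before glewInit() so the
    // framebuffer entry points get filled in.
    glewExperimental = GL_TRUE;
    glewInit();

    printf("GL version: %s\n", (char *)glGetString(GL_VERSION));
    return window;
}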

But mainly, OMG! My Windows build works!! All scripted and automated, too.

Porting to Windows

Finally obtained a Windows computer. Microsoft now makes a fully featured edition of Visual Studio available for free! “Community Edition VS2015”, so, we’re off.

Project up and running pretty easily, a few unix/mac-isms/g++-isms removed. But… the unit tests pass (minus 11 that I temporarily disabled)… but…

On Mac:

     0.  ok       undisposed objects 0(0) == MeObjectBase::currentCount(0)
     0.  ok       undisposed mallocs 0(0) == MeObjectBase::mallocCount - MeObjectBase::freeCount(0)
test results: 11148 pass / 11148 assertions (0.603 seconds)
---------------------
 aok
---------------------

On Windows:

     0.  ok       undisposed objects 0(0) == MeObjectBase::currentCount(0)
     0.  ok       undisposed mallocs 0(0) == MeObjectBase::mallocCount - MeObjectBase::freeCount(0)
test results: 11137 pass / 11137 assertions (19.599 seconds)
---------------------
 aok
---------------------

Well, ok then. Next bit of work is clear!

Which Way Is Up?

Metareal lets you maneuver unhindered in three dimensional space. There is no gravity, no walking. (And no head-bobbing.) It’s perhaps like a drone moving through a space station.

But I wanted to keep the movement simple and easy. The familiar four-key-and-mouse maneuvering works pretty well, after all.

Problem 1: Nearest Ground Plane

The solution was to treat the mouse relative movements as side-to-side, and up-and-down, relative to the dominant ground plane. It turns out that figuring out which ground plane is best is pretty easy! The camera position and angle can be expressed as a conventional 4×4 column major matrix. From this, we look at the second column, which is where the camera’s Y-Up vector gets mapped to, and see which element has the greatest magnitude. It is that simple!

For example,

   /                                  \
   |   0.826   0.070  -0.559   48.700 |
   |   0.563  -0.103   0.820  -22.325 |
   |   0.000  -0.992  -0.125   -0.921 |
   |   0.000   0.000   0.000    1.000 |
   \                                  /

We can see that for this matrix, the value -0.992 (the largest-magnitude element of the second column) shows negative-Z as the dominant Up-vector. So, mouse left and right motions apply, inverted, to the camera’s global Z-rotation. (Arithmetically, we translate the matrix to the origin, rotate around global Z, and translate the matrix back to the camera position.)
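
Picking that dominant element is just a biggest-magnitude scan. Here’s a tiny sketch of what an axisOf-style helper (used in the code further below) might look like; the sign out-parameter is my guess at its second argument.

#include <cmath>

// Which world axis (0 = X, 1 = Y, 2 = Z) does v mostly point along?
// The sign of that component says whether "up" is the + or - direction.
int dominantAxis(const float v[3], float *signOut)
{
    int axis = 0;
    for (int i = 1; i < 3; i++)
        if (fabsf(v[i]) > fabsf(v[axis]))
            axis = i;
    if (signOut)
        *signOut = (v[axis] >= 0.0f) ? +1.0f : -1.0f;
    return axis;
}

For the matrix above, the second column is (0.070, -0.103, -0.992), so this returns axis 2 with a negative sign: negative-Z up.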

Problem 2: Don’t go diagonal!

OopsDiagonal

A problem with this approach is you can end up stuck in a “roll”, where, on the new ground plane, you’re not standing upright. Or, rather, you are standing upright, and if you spin left and right the floor stays beneath you, but your head is tilted. To fix this, we need to discover what is our local-Z-rotation relative to the current ground plane. To correct it, we want to rotate the camera along its line of sight, not changing what it is looking at.

RollQuestion RollQuestion3

A little vector arithmetic does the trick; we cast the camera’s projected X-axis to the ground plane along the projected Y-axis (up vector), and take the dot product between the X-axis and its line along the ground plane. That’s how much we need to roll. Easy! But how to apply the correction?

Problem 3: Correct The Up-Axis Naturally

I tried three approaches to correcting the up-axis, so that you’re usually looking at square walls and doors and things.

  • Manual correction: use some more keyboard keys to roll left and roll right. Terrible! I didn’t like it, anyway.
  • Automatic continuous correction: apply some radians-per-second maximum to correcting the roll. It’s ok, but adds a strange smoothness to your image movement sometimes…
  • Proportional correction: apply some correction but limited by how much you’re moving the camera yourself. This way the camera never moves except when you’re moving it.

I’m not sure which is best, but I documented all three in a short video.

Also, here’s some code.

    float getUpVectorAdjustment()
    {
        /// for now, just print the angle we suspect...
        MeVec3 cameraXVector = this->cameraMatrix.column(0);
        MeVec3 cameraYVector = this->cameraMatrix.column(1);
        
        float nope;
        int upAxis = axisOf(cameraYVector, &nope); // which axis the up vector mostly points along
        
        // Cast the camera's X-axis onto the ground plane along the up vector,
        // then measure the angle between the X-axis and that cast line.
        float t = cameraXVector[upAxis] / cameraYVector[upAxis];
        MeVec3 planePoint = cameraXVector - t * cameraYVector;
        float thetaDot = cameraXVector.normalize().dot(planePoint.normalize());
        thetaDot = pinRangeF(thetaDot, -1, +1); // pin from tiny arithmetic drift...
        float theta = acosf(thetaDot);
        if(t > 0)
            theta = -theta;
        
        return theta;
    }