Case Study: Space Viewport
•Drag in viewport to pan
•Drag on ‘radar’ to pan
•Alt-drag in viewport to zoom
•Click ‘follow’ to follow ship
•Left/right arrows to turn
•Ctrl to thrust
The Space Viewport shows how model/display separation makes panning and zooming of a 2D ‘camera’ possible, and it also demonstrates an object-oriented design for implementing it. This design can be built upon to make a variety of games or ‘edutainment’ titles.
Object-Oriented Design for the Model
The model for this simple universe consists of the variables that store all aspects of the universe, as well as the algorithms that act on those variables: for example, the position of each star and sun, and the algorithm that makes the ship move.
Organizing these variables and algorithms into a number of scripts that work together is an exercise in object-oriented design. The variables are the properties of objects, and the algorithms are written as methods of objects. (Make sure you’re familiar with the material in Object Oriented Fundamentals.)
Each type of element in the universe is programmed as a script (class). So there is one for suns, one for stars, and one for the ship. In addition, there is a script for the universe itself which holds all the elements.
property width — width of universe (in universe units)
property height — height of universe (in universe units)
property elements — contains all objects in universe
property ship — ship object
width = 250.0
height = 250.0
elements = []
repeat with i = 1 to 100
  elements.add(script("star").new(me))
end repeat
repeat with i = 1 to 7
  elements.add(script("sun").new(me))
end repeat
ship = script("ship").new(me)
Use of Inheritance
When you take a look at the properties that the stars, suns, and ship need, you notice that they all have some in common. For one, they all need to store their position within the universe, because each is a universe element. These common properties can be inherited from one ancestor script. Inheritance indicates an ‘is a’ relationship.
So in this demo the star, sun, and ship scripts all have as an ancestor the universeElement script. This saves having to duplicate common properties and methods in each of these scripts, which becomes more significant as more scripts and functionality are added to the model.
Public and Private Methods
The methods of the objects are divided into ‘public’ and ‘private’ sections. Public methods are meant to be called from outside the object, while private methods are only called from within. In Lingo this division is only a convention, but it makes it easier to tell how an object is meant to be used. In formal OO languages like Java, these declarations are an integral part of the language.
The model for this simple universe consists of:
•Dimensions of the 2D space
•Position and dimensions of each element in the universe
•Algorithm to animate ship
•Algorithm to animate suns
Not really much to it. The ship movement algorithm comes from Direction using Sine & Cosine: of Force and Acceleration. In this demo, however, the ship’s position, velocity, and acceleration values are relative to the model coordinate system rather than the screen.
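The ship update can be sketched as follows. This is a Python sketch for illustration only; the property names (angle, vel, and the turn/thrust amounts) are assumptions, not the demo’s actual Lingo names. The key point is that position and velocity are in universe units, not pixels.

```python
import math

# Python sketch of moving the ship in MODEL coordinates.
# Names like `angle` and the turn/thrust constants are assumptions.
class Ship:
    def __init__(self):
        self.pos = [125.0, 125.0]   # universe units, not pixels
        self.vel = [0.0, 0.0]
        self.angle = 0.0            # degrees; 0 = pointing up

    def step(self, turning, thrusting):
        if turning:                 # -1 = left arrow, +1 = right arrow
            self.angle += 5.0 * turning
        if thrusting:               # Ctrl key: accelerate along heading
            rad = math.radians(self.angle)
            self.vel[0] += math.sin(rad) * 0.5
            self.vel[1] -= math.cos(rad) * 0.5   # positive y is down
        self.pos[0] += self.vel[0]  # integrate velocity into position
        self.pos[1] += self.vel[1]

ship = Ship()
ship.step(turning=0, thrusting=True)   # one frame of thrust at angle 0
```

Because all of these values live in the model coordinate system, the same update code works no matter where the camera is or how far it is zoomed.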
The model coordinate system is not explicitly programmed; there isn’t a block of code that you can point to and say ‘this is the model coordinate system.’ But it is implied in that the properties of the elements in the universe are relative to the same coordinate system and handled as such. These include position, size, velocity, acceleration, etc.
It is also implied in the functions that map model coordinates to screen coordinates. These functions are part of the rendering process, so here I’ll just note that these functions are written so that the model coordinate system is similar to the stage coordinate system. That is, all the universe elements lie in the lower-right quadrant with the positive-y axis downward. This simplifies rendering functions.
In this demo, positions within the model coordinate system use the 3D vector data type even though only the x and y coordinates are used. This makes it easier to differentiate between model and screen coordinates (which use the point data type) and also makes it easier to later add depth to the model if desired.
The bulk of the code in the demo deals with rendering, not the model. The scripts for the universe elements contain code for both model and rendering. The rendering code consists of anything relating to the sprite that represents the model element.
The render script renders the model by setting the location and other properties of the sprites that represent the elements in the model.
The ‘camera’ in this demo consists of a location in the model (camVec) and a zoom level (zoomLevel). These values are used in mapping universe coordinates to screen coordinates, with the camera location centered in the view area.
The camera is constrained in a few ways. If ‘follow’ is turned on, the camera follows the ship by simply setting the camera location equal to the ship location. Imagine how much more complicated this would be to do if a model/display technique were not used!
A minimum zoom level keeps the camera from zooming out to where the universe is smaller than the viewport. And the camera’s location is kept a certain margin from the edge of the universe so that space outside the universe is not seen within the viewport. The size of this margin changes with the zoomLevel.
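Both constraints can be sketched in a few lines. This is a Python sketch under assumed universe and viewport sizes; the demo’s actual constants and property names will differ. The margin is half the viewport expressed in universe units, so it shrinks as the zoom level grows.

```python
# Python sketch of the camera constraints. Sizes are assumptions.
UNI_W, UNI_H = 250.0, 250.0        # universe size, in universe units
VIEW_W, VIEW_H = 320.0, 240.0      # viewport size, in pixels

def clamp_camera(cam_x, cam_y, zoom):
    # Minimum zoom: the universe must at least fill the viewport.
    min_zoom = max(VIEW_W / UNI_W, VIEW_H / UNI_H)
    zoom = max(zoom, min_zoom)
    # Margin: half the viewport in universe units. Keeping the camera
    # at least this far from each edge ensures no space outside the
    # universe is visible; the margin changes with the zoom level.
    margin_x = (VIEW_W / zoom) / 2.0
    margin_y = (VIEW_H / zoom) / 2.0
    cam_x = min(max(cam_x, margin_x), UNI_W - margin_x)
    cam_y = min(max(cam_y, margin_y), UNI_H - margin_y)
    return cam_x, cam_y, zoom

# A camera pushed to the top-left corner gets clamped to the margin.
cam = clamp_camera(0.0, 0.0, 2.0)
```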
Mapping between coordinate systems
The universe coordinate system is mapped to both the viewport and the radar. In addition, the viewport is mapped to the universe in order to get the universe dimensions for the ‘view area’ box in the radar. And the radar is mapped to the universe in order to position the camera based on a click on the radar.
The mapping functions consist of ‘shifting and scaling’. The scaling is done by multiplying by the ratio of the dimensions of one area to another. The shifting is done by adding the difference between the positions of the upper left corner of each area. For the universe that is (0,0), for the viewport and radar it is the stage coordinates of the upper left corner of each.
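The shift-and-scale idea can be written out directly. Below is a Python sketch of a uniToView-style mapping and its inverse, for illustration; the viewport rectangle and the way the camera is centered are assumptions modeled on the description above, not the demo’s exact Lingo code.

```python
# Python sketch of 'shifting and scaling' between coordinate systems.
# VIEW is an assumed (left, top, width, height) for the viewport's
# position on the stage.
VIEW = (20.0, 20.0, 320.0, 240.0)

def uni_to_view(ux, uy, cam_x, cam_y, zoom):
    # Scale by the zoom level, then shift so the camera location
    # lands at the center of the viewport.
    vx = (ux - cam_x) * zoom + VIEW[0] + VIEW[2] / 2.0
    vy = (uy - cam_y) * zoom + VIEW[1] + VIEW[3] / 2.0
    return vx, vy

def view_to_uni(vx, vy, cam_x, cam_y, zoom):
    # Exact inverse: un-shift, then un-scale.
    ux = (vx - VIEW[0] - VIEW[2] / 2.0) / zoom + cam_x
    uy = (vy - VIEW[1] - VIEW[3] / 2.0) / zoom + cam_y
    return ux, uy

# The point under the camera always maps to the viewport center.
center = uni_to_view(125.0, 125.0, 125.0, 125.0, 2.0)
```

The radar mappings follow the same pattern, with the radar’s stage rectangle and the universe-to-radar ratio in place of the camera and zoom.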
The methods that contain the mapping functions [uniToView() uniToRadar() viewToUni() radarToUni()] are only called from within the render script and so may be considered ‘private’ methods. However, as complexity is added it is conceivable that other scripts may need access to these methods and so I made them public.
Rendering to the Viewport
Rendering is done on two levels. First, the render script sets properties common to all universe elements. Then it calls each element’s own render() method:
repeat with element in universe.elements
  element.sp.loc = uniToView(element.posVec)
  element.sp.height = element.height * zoomLevel
  element.sp.width = element.width * zoomLevel
  element.render()
end repeat
renderToView() method of “render” script
This way each type of element can make some custom modifications to its sprite, such as the ship and suns setting rotation.
Notice that the sp property and render() method of each element in universe.elements are accessed even though the element variable can hold a variety of object types. The element variable points to ship, star, and sun objects in turn. It works because these objects have common properties and methods by inheriting them from the universeElement script (class). This is known in OO jargon as ‘polymorphism’.
This is the reason there is an empty render() method defined in the universeElement script. Scripts that inherit from universeElement can override this method, as do sun and ship scripts. But they don’t need to, as shown by the star script. In either case the call to the render() method of the object is valid.
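The pattern of an empty base render() that subclasses may optionally override can be sketched like this. This is a Python sketch; the class names mirror the demo’s scripts, but the return values are placeholders invented for illustration.

```python
# Python sketch of the universeElement inheritance/override pattern.
class UniverseElement:
    def render(self):
        # Empty default, so calling render() on ANY element is valid.
        return "base"

class Star(UniverseElement):
    pass                     # no override needed, like the star script

class Sun(UniverseElement):
    def render(self):        # override: e.g. set sprite rotation
        return "sun rotated"

class Ship(UniverseElement):
    def render(self):        # override: rotate sprite to heading
        return "ship rotated"

elements = [Star(), Sun(), Ship()]
# One loop, one call, three different behaviors: polymorphism.
results = [e.render() for e in elements]
```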
Rendering to the Radar
This consists of setting the location of the radar ship dot, by making use of the uniToRadar() mapping method.
Also on the radar is the ‘view area’ box, but placing and sizing this box is not technically a part of rendering since the box doesn’t represent anything in the universe model. Instead, the box shows the relationship between the Viewport and the universe as specified by the camera.
To set the box, first the Viewport rectangle is mapped to universe coordinates and then those universe coordinates are mapped to the radar:
spRadarShip.loc = uniToRadar(universe.ship.posVec)
topLeft = viewToUni(point(viewRect.left, viewRect.top))
botRight = viewToUni(point(viewRect.right, viewRect.bottom))
spRadarBox.rect = rect(uniToRadar(topLeft), uniToRadar(botRight))
renderToRadar() method of “render” script
Single Starfield Image
Another way to program the starfield, which is less processor intensive, is to use one image for the whole starfield rather than using a sprite for each star. Add a “starfield” element to the universe model with a script like this:
on new(me, uniW, uniH)
  me.ancestor = script("universeElement").new()
  me.posVec = vector(uniW/2, uniH/2, 0)
  me.height = uniH
  me.width = uniW
  me.sp.member = "starFieldMember"
  return me
end
Tree Data Structure for the Model
For simplicity in this demo, the universe object uses a list to store the elements in the universe. But what if, for example, you wanted to have moons circling planets which circle suns? Or you have several elements you want to animate as a group? Using a tree to store the universe elements makes this type of animation much easier.
This is the technique used for 3D worlds, and it can be used very similarly for 2D. See 3D World Hierarchy for more on using trees in this way. A few of the primary differences between using a list and a tree are:
•instead of iterating through the list, you’ll traverse the tree recursively
•each element’s position is relative to its parent, so its absolute position is found by accumulating the positions of its ancestors as the tree is traversed
I’d stick with using the transform data type as in the 3D quad demos. In 2D there will be just a transform to specify position, dropping the vectors used for the quad corners. A bonus is that you get rotation and scale built into the transform type. Take sprite rotation from the transform’s rotation vector z value (assuming you are working in the 2D xy plane), and scale the width/height from the transform’s scale vector xy values.
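The recursive traversal with parent-relative positions can be sketched as follows. This is a Python sketch that accumulates only position, omitting the rotation and scale a full transform type would carry; the sun/planet/moon distances are made-up illustration values.

```python
# Python sketch of a tree-structured model: each node stores its
# position RELATIVE to its parent, and rendering traverses the tree
# recursively, accumulating absolute positions along the way.
class Node:
    def __init__(self, x, y):
        self.x, self.y = x, y          # position relative to parent
        self.children = []

    def render(self, parent_x=0.0, parent_y=0.0, out=None):
        out = out if out is not None else []
        ax = parent_x + self.x         # absolute = parent + relative
        ay = parent_y + self.y
        out.append((ax, ay))
        for child in self.children:
            child.render(ax, ay, out)  # recurse with our absolute pos
        return out

sun = Node(100.0, 100.0)
planet = Node(30.0, 0.0)               # 30 units from its sun
moon = Node(5.0, 0.0)                  # 5 units from its planet
sun.children.append(planet)
planet.children.append(moon)
positions = sun.render()
```

Moving the sun now moves the planet and moon for free, which is exactly what makes grouped animation so much easier with a tree than with a flat list.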
For an implementation see Space Viewport 2.0.
A Rendering Alternative
In this demo, each universe element has a dedicated sprite. When the element is first created it puppets the sprite, sets a few of its properties, and uses it throughout the program.
A different way is to puppet the sprites and set the sprite properties for all the elements each time the universe is rendered, first releasing the sprites used for the last render. This takes more computation but is a cleaner render technique and is a better solution in some cases.
For example, say you had 10 universe models to view alternately with each using 150 sprites to render. This would be easy to do, just instantiate 10 universe objects and set the renderObj.universe property to the one you wanted to view. However, if the sprites are dedicated as in this demo you’d need 1500 sprites puppeted simultaneously. So this alternate way of rendering would be the better solution in this case.
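The pooled approach can be sketched with sprite channels modeled as a simple list of slots. This is a Python sketch of the idea only; the pool size and the way channels are claimed are assumptions, and in Lingo the claim/release steps would be puppetSprite calls.

```python
# Python sketch of the alternative render technique: claim sprites
# from a shared pool on every render, releasing last frame's sprites
# first, instead of dedicating one sprite per element for life.
POOL_SIZE = 150

class SpritePool:
    def __init__(self):
        self.slots = [None] * POOL_SIZE    # None = free channel

    def render(self, universe_elements):
        # Release every sprite used for the last render...
        self.slots = [None] * POOL_SIZE
        # ...then re-claim one channel per element this frame.
        for i, element in enumerate(universe_elements):
            self.slots[i] = element        # 'puppet' channel i
        return sum(s is not None for s in self.slots)

pool = SpritePool()
# Any of the 10 universes can be rendered through the same 150 slots.
used = pool.render(["ship", "star1", "star2"])
```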