Vector vs. Bitmap
Art Decko said:
Wunwinglow, thanks for the advice and screenshots! Great information!
I'm still wondering how you negotiate going back and forth between vector and bitmap.
Suppose you design something in Sketchup/Pepakura, then use a graphics editor to make the textures. What if SketchUp gives you something like a door that is 80.4 pixels wide? Do you just make the graphic as close as possible, then let SketchUp re-size it to fit the vector-mapped outline for the door? I'm thinking that could cause some weird aliasing or other unpredictable problems. Have you ever dealt with this situation?
Thanks again for the lowdown!
Art Decko,
I'll try to clear this up. In a nutshell, what you say above is absolutely correct. You create a texture as a bitmap, then "map" that texture onto a 3D shape. It's not a question of "suppose" you do - this is the way you must do it.
3D programs like Sketchup and Metasequoia represent objects internally as a collection of points, lines and shapes. These points, lines and shapes define the locations and boundaries of the surfaces that make up the 3D object. The program has no knowledge of what a surface actually looks like, just where it is and what shape it is.
Texturing is the mechanism we use to tell the 3D program what the surface looks like. We take a bitmap (which could be a digital photo, a drawing, a texture, whatever) and "map" it onto the surface. Think here in terms of trying to wrap a gift. Take an arbitrary 3D shape and try to wrap it in wrapping paper. It's really easy with a box, but with complex shapes like an airplane or a bulldozer it gets tougher. The 2D texture tends to distort as it gets wrapped onto the 3D shape. And if your piece of wrapping paper (texture) isn't quite the right size, you have to make adjustments during the wrapping process (you'll see this mapping process referred to as U/V mapping, and there are others too).
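To make the U/V idea concrete, here's a minimal sketch (the function name and everything else here are my own invention, not any particular program's API). U and V run from 0.0 to 1.0 across the surface, so the same U/V coordinates work no matter what size the texture bitmap happens to be:

```python
# Toy U/V texture lookup. U and V run from 0.0 to 1.0 across the
# texture, independent of the texture's actual pixel dimensions.

def uv_to_pixel(u, v, tex_width, tex_height):
    """Convert a U/V coordinate on a surface to a texture pixel."""
    # Clamp to the valid range, then scale to pixel indices.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = min(int(u * tex_width), tex_width - 1)
    y = min(int(v * tex_height), tex_height - 1)
    return x, y

# The middle of a surface always lands on the middle of the texture,
# whatever size the bitmap is:
print(uv_to_pixel(0.5, 0.5, 256, 256))    # (128, 128)
print(uv_to_pixel(0.5, 0.5, 1024, 1024))  # (512, 512)
```

This is why the 3D program can stretch or squeeze any bitmap over a surface - the mapping is done in relative coordinates, not in pixels.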
Now, so far we have a description of the 3D object in a specific program's own unique "internal" representation. We still can't even see the thing. We've told our 3D program what shape the thing is (where the surfaces are), and we've given it a texture (what the surfaces look like). Now, to display the object on your screen, or print it on your printer, this internal representation must be converted to a suitable form. Your screen/printer displays/prints the image as line after line of pixels (usually called dots on a printer), each pixel being one specific color. This is, by definition, a bitmap.
If the "vector" representation of a door (to use your example) indicates that the surface of the door should be 80.4 pixels wide, it must be rounded up or down to a whole pixel - either 80 or 81 in this case. If the 3D program decides to go with 80, and your texture is 81 pixels wide, the result may not look as expected.
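You can see the mismatch with a bit of arithmetic (the numbers are just your 80.4-pixel door example; the one-pixel-off texture is an assumed case):

```python
# The rounding problem: a vector width of 80.4 "pixels" has to land
# on a whole number of screen pixels.

vector_width = 80.4
rendered_width = round(vector_width)   # 80 - the renderer must pick one
texture_width = 81                     # the bitmap you happened to draw

# The texture now has to be squeezed to about 98.8% of its width to
# fit, and that resampling is where aliasing artifacts creep in.
scale = rendered_width / texture_width
print(rendered_width, texture_width, round(scale, 4))  # 80 81 0.9877
```

A one-pixel squeeze like this is exactly the kind of thing that produces the "weird aliasing" Art Decko is worried about, though in practice it's usually barely visible.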
Typically, the 3D modelling is done with a modelling program like Sketchup, Blender or Metasequoia, and the textures are done using a paint program like MS Paint, Paintshop, etc. Paint programs operate in much the same manner as your screen/printer - they see row after row of individual pixels. As Wunwinglow points out, there are other programs (like Coreldraw) that work differently. They store information about the texture image as vector data - essentially 2D lines and shapes that are filled with colors and/or patterns.
The benefit of vector graphics is that as you scale an image up, the program actually redraws the lines, shapes, colors and patterns at the new scale, giving you crisp, precise lines and shapes no matter how far you enlarge it. Bitmap-based programs, on the other hand, start with a fixed bitmap. They have no information about what is being represented, so to enlarge an image all they can do is duplicate each pixel multiple times, and maybe make some guesses about how adjacent pixels might blend more naturally. But the bigger you go, the grainier and uglier it gets. Vector-based programs, however, store a description of what is to be represented, and can thus render it fully and completely at any scale.
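Here's a toy illustration of the pixel-duplication half of that story (this is the crudest form of bitmap enlargement, nearest-neighbour scaling; real paint programs also offer smarter blending, but the graininess problem is the same). "Pixels" here are just characters in a list of rows:

```python
# Enlarging a bitmap by duplicating pixels - all a bitmap editor can
# really do, since it has no idea what the pixels represent.

def enlarge_bitmap(rows, factor):
    """Scale a tiny character 'bitmap' up by an integer factor."""
    out = []
    for row in rows:
        wide = "".join(ch * factor for ch in row)  # duplicate across
        out.extend([wide] * factor)                # duplicate down
    return out

diagonal = ["#.", ".#"]          # a 2x2 "diagonal line"
for row in enlarge_bitmap(diagonal, 3):
    print(row)
```

The "diagonal" comes out as a staircase of 3x3 blocks. A vector program, knowing it was asked for a diagonal line, would simply redraw the line at the new size, smooth at any scale.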
Now, here's the tricky part. Let's say you start out to develop a 1/72 scale model of a certain vehicle. You develop your 3D model, develop some textures, map the textures onto the model, and then run it through another program to unfold it into a 2D pattern, and presto, you have your model. At some point in this process you have to save your textures to a bitmap file. This may be in Jpeg, GIF, or BMP format, but a bitmap is a bitmap, and the resolution is now fixed. Once a bitmap, always a bitmap. Your final model pattern is also a bitmap. It may be wrapped in a PDF file, or saved in a JPEG file, but it's still a bitmap.
Now, let's say you decide to produce a larger, 1/33 scale model. If you take your final pattern (which is now a bitmap) and simply enlarge it, you are up against the limitations of enlarging bitmaps described above. If you double the scale of the 3D model, your bitmap now has to cover twice the width and twice the height (four times the area), which it wasn't designed to do; the 3D program simply does the bitmap enlargement for you, and you end up with much the same grainy result.
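The arithmetic for the 1/72-to-1/33 jump looks like this (the 512-pixel texture width is just an assumed example, not anything from a real model):

```python
# How much bigger does the texture need to be when going from
# 1/72 scale to 1/33 scale?

old_scale = 1 / 72
new_scale = 1 / 33
factor = new_scale / old_scale      # = 72/33, about 2.18x in each direction

texture_px = 512                    # assumed width of the 1/72 texture
needed_px = texture_px * factor     # width the 1/33 model really wants

print(round(factor, 2))             # 2.18
print(round(needed_px))             # 1117 - the old 512 px bitmap falls short
```

So the 1/33 pattern needs a texture more than twice as wide (and more than twice as tall), which a fixed bitmap simply doesn't have.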
If, however, your texture was developed in a vector-based program, you open up your original vector file, increase the scale, and then save a new, larger bitmap suited to the new, larger model, with absolutely no loss of quality. Unfortunately, you now have to open up your 3D model, reapply the texture and adjust the mapping.
Even more tricky: if you think ahead and make a high-resolution bitmap texture for your 1/72 model, sized with a future 1/33 model in mind, you can use the same texture file for both, and the 3D programs will take care of the scaling for you. In that case, which program you used to create the bitmap makes little difference.
Relating this to file formats
To extend this lesson a little further, since we're already almost there: the internal representation used by any given program (like Sketchup) can be thought of as its native graphic file format. Take an object like a bicycle. Metasequoia has its own unique way of internally storing information about the shapes that make up the bicycle. Orbiter (a space flight simulator) uses a different internal representation for the same shapes, and Sketchup yet another. They are all describing the same object (a bicycle), and must all store the same basic information (lines, shapes, positions, colors, textures, etc.); they just each have their own unique way of doing it. When you save the model to disk, each program typically uses its own native format. Most 3D programs support several common file formats, but none support all of them.
This is where translation (or conversion) programs like UMC2 or 3D Exploration come in. They have been built to understand many different file formats, from many other programs/sources, and can convert from one to the other with little or no loss of information. So you can (almost) always use your favourite program, because there are (almost) always lots of other people out there with the same favourite program, and one of them has probably written a program to convert that other format to the format you need. And they (or somebody else) have probably written the program to convert it back again too.
I'm a firm believer that there's really no "best" program for anything. It's a question of what fits your budget, whether the program fits the way you think and work, and so on. Try out the popular ones, find the one you like best, then learn it inside and out. Read everything you can. Play with it. Become an expert with it.
And have fun,
Steve