Video Conference

video conference @ eBIZ Education Team


Need
If you want to try out this new technology for the experience, then of course it is worth it; and probably worth traveling to do it.
If you believe the point of video conferencing is to save traveling, then it is fundamentally silly to travel to use another video suite. After all, video conferencing is inferior in many ways to face to face meetings (e.g. no social or private business with others "on the side"), and must have a strong saving in time to be worthwhile.
But difficulties in booking can easily lead to only some sites being available at the time picked. If someone asks you to travel to a video conference, consider refusing: perhaps they would like to travel to your office instead?

Preparation
Before the conference:
1. Book the conference hall.
It is advisable to check and re-check the booking at both ends.
Problems with the booking system, from a user's viewpoint, include:
o Even when all agree a booking has been made, the connection is not always made unless someone prompts the operators.
o It is almost impossible to know whether a booking has been accepted. Confirmations may or may not be sent, may or may not be accurate. The long chain of people involved makes this very unreliable (e.g. only one end of a video conference will do the booking on behalf of all; they will go through their local contact, who will contact Edinburgh.) A failure at any point of this chain results in people not knowing the state of the booking.
o The public web record of bookings is not kept up to date and does not reflect what conferences are booked and what slots are free.
o The notation for sites in that record is not comprehensible by users. It doesn't use the normal names of the places connected, and doesn't provide a glossary.
Organise a parallel computer link (an equivalent of an OHP) if talks rather than discussion are to be presented. The video link will only transmit one video channel: typically a picture of the current speaker.
To give a talk, the equivalent of an OHP is needed to transmit "slides" to another monitor in each video suite. Audiences say they quickly get tired of hearing without seeing the speaker (this was the comment by students on a 10 minute monologue with slides I gave in one of our sessions), so the main video channel cannot be used for "slides" successfully.
This extra link is not (yet) provided as standard, but can be arranged by having a computer with an internet connection in every suite, linked perhaps by NetMeeting. You are likely to have to organize this equipment yourself: certainly independently of booking the video conference. You need to:
o Arrange to have the hardware (computers) set up at every site for the conference. In a big room, you then need to have the computer display projected on a big screen so everyone at the site can see it.
o Arrange to have them connected to the internet there.
o Decide how to link them, e.g. if you use NetMeeting, then all the machines need to be PCs.
o Decide how to prepare your "slides". PowerPoint is easy; web pages can also work. Slides done by hand or on a photocopier will not transfer.
o You will probably need to know the IP address of those machines (or rather the network ports in the rooms) and to tell the other participants what they are.
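As a sketch of that last step (not part of the original advice), a short Python script can report a machine's IP address so it can be circulated to the other sites. The 8.8.8.8 address is just an arbitrary reachable host used to select the right network interface; no data is sent.

```python
# Sketch: report this machine's IP address so it can be told to the other
# sites before the conference. A UDP "connect" sends no packets; it only
# asks the operating system which local address routes toward 8.8.8.8.
import socket

def local_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no packets actually sent for UDP connect
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available; fall back to loopback
    finally:
        s.close()

print(local_ip())
```

Note that a machine behind a firewall or NAT may have a different address as seen from outside, so the reported value should still be confirmed with local technical staff.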
Agree and publish an agenda
Consider introducing yourselves in advance by another medium e.g. email, web pages.
Use email to prepare everyone for the meeting.
Preparations might include:
o Every site should have a written list of phone numbers: those of every other video suite, the Edinburgh switching centre, and the phone extensions of local technical assistance.
o If a computer internet link is being used in parallel, then each site should have to hand a written note of their own IP address (to tell other sites as required).
o Every site needs someone familiar with the video controls: these cannot be learned simultaneously with having a meaningful conference. If you don't have an experienced user, then someone needs to practice in advance (see next section).

Setting
The controls are not effortlessly usable. Therefore:
• You need to have a practiced person at each site to operate the controls, organizing their training if necessary.
• If possible the person "chairing" the session should not also be operating the controls.

Arranging for a practiced user
The controls are not effortlessly usable. Therefore at each site you
• Either need a user with previous experience of THAT suite (the controls are different at every site);
• Or need to arrange a little practice for a designated person.
Groups get restless very quickly when someone is practicing or fumbling while they wait (another student criticism of one of our cases): after all, they can't learn anything because it is not their hands on the controls.
So having someone turn up and do it for the first time with a group causes dissatisfaction and the perception of a bad meeting.
A new user can practice a lot of it without a connection (operating the cameras and looking at the result on a monitor), but the best thing is to book the conference 30 mins earlier and have one person at each site turn up then to practice and to check the arrangements. During such a setup, you could:
• Ask if they can hear you comfortably; and vice versa
• Ask if they can see you; find camera shots that THEY say suits them.
• Ask them to look you in the eyes (in their monitor) so you know what direction they are looking when they are looking at you.

Sound
The position of the microphones should be taken into account when positioning the participants. You cannot judge what sound you are transmitting (unless you have a sound meter).
You must ask the other end and believe what they say. The fact that you can hear OK is, unlike in face to face, no clue at all about what they can hear.
Someone should test the sound coming from the speakers in different parts of the room.

Room layout (preparation)
Having all the chairs facing one way, towards the cameras and monitors, seems to work well.
One issue is giving everyone a good view of the screens (and being in view of the cameras). Another is that if a group are in a circle, it is easy for them to feel a group and the person at the far end to feel not part of it.

Visual resolution
Effective resolution is bad. What matters is the size of objects at the user's eye (in, say, centimeters per radian, or inches per degree). Thus it doesn't directly matter how big text is at the far end: a lot depends on the display at the receiving end.
To get the most out of a video channel, every user needs to be near enough to the screen that they can just or almost see the individual pixels or scan lines. However in many video suites, although the monitors look big, in fact users are much further away.
For instance, sitting at my office computer, the monitor fills 20-30 degrees of my field of vision, but in the video suite at Glasgow, it fills perhaps 5 degrees.
Just as in giving a talk at a new place, you cannot be sure how big you need to make the text on your OHPs, so in video conferencing you cannot be sure what the display conditions will be at the far end (and you cannot see them yourself either); but our experience is that this is a concern.
1. Only one face can be recognised at a time. Wider shots show bodies, but not who they are. The camera should mainly focus on one or two people at a time and not just a distant view of all the participants. This allows the remote person or audience to gauge reaction etc. and feel "part" of the whole activity.
2. If you have name plates or hold up printed material, the letters need to be over 2 inches high (about 144 point print) in a shot framed to show a person.
3. It is useful to have a visualiser available. (You can then, but only then, use smaller print. Smaller means say 24 point, NOT 12 point.) I.e. Bring printed "slides": with font as big as OHPs require. (A "visualiser" is a "rostrum camera" i.e. lights and downward pointing camera set up to do closeups of bits of paper. Probably looks like an OHP with a video camera where the projector lens should be.)
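The font sizes quoted above can be checked with the standard conversion of 72 points per inch; a quick sketch:

```python
# Standard desktop-publishing conversion: 1 inch = 72 points.
def points_to_inches(pt):
    return pt / 72.0

print(points_to_inches(144))  # 144 pt lettering is 2 inches high
print(points_to_inches(24))   # 24 pt "visualiser" print is a third of an inch
```

This is why 12 point print (a sixth of an inch) vanishes on camera while OHP-sized fonts survive.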

Meeting
Issues arise at three scales.
The large scale: considering the purpose of the meeting, and organising the overall joint task. In education, this is the level of pedagogical success or failure.
The medium scale: things you can do in any meeting to make it go better, e.g. start with introductions, begin by agreeing an agenda.
The small scale: issues of turn taking, asking the other end to give you a different camera shot (or not, and being frustrated).
Large scale: Organising the meeting
All the preparation that can help any meeting and/or tutorial applies. Basically: have a clear idea of the main purpose of the meeting, and have all participants prepared for it. Thus if it is to be a tutorial, the students need to have done the work and be prepared to present in some definite way.
1. Agenda.

A definite agenda for the conference is useful and should be agreed and circulated beforehand, particularly because it tells each participant what to prepare (unless the meeting is so simple that no separate document is needed).
o Alternatively, an electronic agenda (e.g. a web page, a PowerPoint screen) could be made available during the meeting if an extra internet connection (e.g. using NetMeeting between PCs in every video suite) is being used. This has the advantage that it can be edited during the meeting, yet still be shared by all participants.
2. Participants should have access to all the material for the conference and time to read it before the conference takes place. Materials which are on the Web can be accessed easily by both sites and shared, discussed etc. -- that is one method, but faxing paper can equally work for small numbers.
3. One recipe that works (has worked) is for the student to have written an essay, the tutor to have read and commented on it, and preferably to have sent the written comments in advance. Then the discussion can consist of going through the comments.
4. Another is for students doing group work to prepare a short presentation of their results or what they have done, including electing which student will represent the group. The tutor can then discuss these presentations.
5. But just as in face to face seminars, a general discussion may flop unless all participants know they will be speaking (and what about) and prepare some ideas to offer.
Running the meeting: medium scale social actions
All the things that help run any meeting and/or tutorial apply, but are more important.
1. Agenda.

A definite agenda for the conference is useful and should ideally be visible to all participants during the meeting.
2. When a long PowerPoint presentation, many overheads, etc. are used, it helps if the audience at the other site can occasionally see the lecturer/tutor instead of just hearing him/her.
Either organize a second channel (e.g. NetMeeting over the internet, to give two screens of communication), or have the person in charge of the equipment at the speaker's end switch regularly between the visualiser (shot of a slide) and a shot of the speaker.
3. Unless the group has already met before, it should begin by going round in turn with each person introducing themselves, including a statement of what they hope to gain from this meeting.
o In multi-site conferences, it is important to go round each site at the start so that everyone gets at least a glimpse of the rest of the audience. Remember that you will then only see one other site at a time.
4. Each person should construct a nameplate in front of themselves. Few people can remember more than 2 names from introductions. However the lettering must be very large, e.g. 144 point (2 inches high).
o In multi-site conferences, it is also important to have clear labels for each site, as the picture will jump between sites, which often look like anonymous rooms. The best solution is to have a caption inserted electronically on the outgoing image, as is now done by the University of Glasgow; otherwise a name plate with enormous lettering.
5. A good way to promote discussion, particularly if it is the first discussion the group has, is to ask each person to say how the topic relates to a personal experience.
Running the equipment: small scale social actions
You have to control the camera shots. And because (see below) this doesn't do everything you want, you have to do small scale social actions to compensate e.g. ask the other end to change the camera shot, nod in an exaggerated way to compensate for low resolution, etc.
• What is wanted, but you can't have, is control of the cameras at the other end, just as in face to face you turn your eyes and head to see what you want when you want. This is not currently offered.
• Because of this, you have to tell them what you want: normal tacit practices won't work. For instance, if their sound is too quiet it is no good talking louder. They hear fine, and won't talk louder to suit you, particularly if they have several people in their room who can hear each other fine.
• Probably it is best to begin by explicitly asking each other if you can hear well.
• Then, ask them to look at their monitor that shows your face(s): so you will know what it looks like when they are looking at you. In most setups, their eyes will not meet yours, but be looking downwards (cameras are often on top of the screens). You have to ask what "eye contact" will look like.
• The person controlling the shots probably needs to have little else to do, so they can concentrate on what is wanted and how to operate the controls.
• What the other end will want is both to see the room as a whole, and the speaker's face and reactions, and what the speaker is pointing to e.g. a slide on the visualiser. This isn't possible.
• Probably participants should train themselves to give explicit feedback about still being "there". Just as on the phone you have to say "uh huh" more often than face to face, so you probably need to do this on video conferences AND have the cameras show all bodies/faces periodically.
• Similarly, probably we should get in the habit of explicitly asking them to change the camera shot ("show me what the others are doing now").

3D Graphics



Introduction to 3-D Graphics

You're probably reading this on the screen of a computer monitor -- a display that has two real dimensions, height and width. But when you look at a movie like "Toy Story 2" or play a game like Tomb Raider, you see a window into a three-dimensional world. One of the truly amazing things about this window is that the world you see can be the world we live in, the world we will live in tomorrow, or a world that lives only in the minds of a movie's or game's creators. And all of these worlds can appear on the same screen you use for writing a report or keeping track of a stock portfolio.

How does your computer trick your eyes into thinking that the flat screen extends deep into a series of rooms? How do game programmers convince you that you're seeing real characters move around in a real landscape? In this short tutorial, we will tell you about some of the visual tricks 3-D graphic designers use, and how hardware designers make the tricks happen so fast that they seem like a movie that reacts to your every move.


What Makes a Picture 3-D?

A picture that has or appears to have height, width and depth is three-dimensional (or 3-D). A picture that has height and width but no depth is two-dimensional (or 2-D). Some pictures are 2-D on purpose. Think about the international symbols that indicate which door leads to a restroom, for example. The symbols are designed so that you can recognize them at a glance. That’s why they use only the most basic shapes. Additional information on the symbols might try to tell you what sort of clothes the little man or woman is wearing, the color of their hair, whether they get to the gym on a regular basis, and so on, but all of that extra information would tend to make it take longer for you to get the basic information out of the symbol: which restroom is which. That's one of the basic differences between how 2-D and 3-D graphics are used: 2-D graphics are good at communicating something simple, very quickly. 3-D graphics tell a more complicated story, but have to carry much more information to do it.

[Illustration: 2-D triangles on the left, a 3-D pyramid on the right]
Take a look at the triangles above. Each of the triangles on the left has three lines and three angles -- all that's needed to tell the story of a triangle. We see the image on the right as a pyramid -- a 3-D structure with four triangular sides. Note that it takes five lines and six angles to tell the story of a pyramid -- nearly twice the information required to tell the story of a triangle.
For hundreds of years, artists have known some of the tricks that can make a flat, 2-D painting look like a window into the real, 3-D world. You can see some of these on a photograph that you might scan and view on your computer monitor: Objects appear smaller when they're farther away; when objects close to the camera are in focus, objects farther away are fuzzy; colors tend to be less vibrant as they move farther away. When we talk about 3-D graphics on computers today, though, we're not talking about still photographs -- we're talking about pictures that move.
If making a 2-D picture into a 3-D image requires adding a lot of information, then the step from a 3-D still picture to images that move realistically requires far more. Part of the problem is that we've gotten spoiled. We expect a high degree of realism in everything we see. In the mid-1970s, a game like "Pong" could impress people with its on-screen graphics. Today, we compare game screens to DVD movies, and want the games to be as smooth and detailed as what we see in the movie theater. That poses a challenge for 3-D graphics on PCs, Macintoshes and, increasingly, game consoles like the Dreamcast and the PlayStation 2.

What Are 3-D Graphics?

For many of us, games on a computer or advanced game system are the most common ways we see 3-D graphics. These games, or movies made with computer-generated images, have to go through three major steps to create and present a realistic 3-D scene:

1. Creating a virtual 3-D world.
2. Determining what part of the world will be shown on the screen.
3. Determining how every pixel on the screen will look so that the whole image appears as realistic as possible.
Creating a Virtual 3-D World
A virtual 3-D world isn't the same thing as one picture of that world. This is true of our real world also. Take a very small part of the real world -- your hand and a desktop under it. Your hand has qualities that determine how it can move and how it can look. The finger joints bend toward the palm, not away from it. If you slap your hand on the desktop, the desktop doesn't splash -- it's always solid and it's always hard. Your hand can't go through the desktop. You can't prove that these things are true by looking at any single picture. But no matter how many pictures you take, you will always see that the finger joints bend only toward the palm, and the desktop is always solid, not liquid, and hard, not soft. That's because in the real world, this is the way hands are and the way they will always behave. The objects in a virtual 3-D world, though, don’t exist in nature, like your hand. They are totally synthetic. The only properties they have are given to them by software. Programmers must use special tools and define a virtual 3-D world with great care so that everything in it always behaves in a certain way.
What Part of the Virtual World Shows on the Screen?
At any given moment, the screen shows only a tiny part of the virtual 3-D world created for a computer game. What is shown on the screen is determined by a combination of the way the world is defined, where you choose to go and which way you choose to look. No matter where you go -- forward or backward, up or down, left or right -- the virtual 3-D world around you determines what you will see from that position looking in that direction. And what you see has to make sense from one scene to the next. If you're looking at an object from the same distance, regardless of direction, it should look the same height. Every object should look and move in such a way as to convince you that it always has the same mass, that it's just as hard or soft, as rigid or pliable, and so on.
Programmers who write computer games put enormous effort into defining 3-D worlds so that you can wander in them without encountering anything that makes you think, “That couldn't happen in this world!" The last thing you want to see is two solid objects that can go right through each other. That’s a harsh reminder that everything you’re seeing is make-believe.
The third step involves at least as much computing as the other two steps and has to happen in real time for games and videos.

Making It Look Real

No matter how large or rich the virtual 3-D world, a computer can depict that world only by putting pixels on the 2-D screen. This section focuses on how what you see on the screen is made to look realistic, and especially on how scenes are made to look as close as possible to what you see in the real world. First we'll look at how a single stationary object is made to look realistic. Then we'll answer the same question for an entire scene. Finally, we'll consider what a computer has to do to show full-motion scenes of realistic images moving at realistic speeds.

A number of image parts go into making an object seem real. Among the most important of these are shapes, surface textures, lighting, perspective, depth of field and anti-aliasing.
Shapes
When we look out our windows, we see scenes made up of all sorts of shapes, with straight lines and curves in many sizes and combinations. Similarly, when we look at a 3-D graphical image on our computer monitor, we see images made up of a variety of shapes, although most of them are made up of straight lines. We see squares, rectangles, parallelograms, circles and rhomboids, but most of all we see triangles. However, in order to build images that look as though they have the smooth curves often found in nature, some of the shapes must be very small, and a complex image -- say, a human body -- might require thousands of these shapes to be put together into a structure called a wireframe. At this stage the structure might be recognizable as the symbol of whatever it will eventually picture, but the next major step is important: The wireframe has to be given a surface.
This illustration shows the wireframe of a hand made from relatively few polygons -- 862 total.
The outline of the wireframe can be made to look more natural and rounded, but many more polygons -- 3,444 -- are required.
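As an illustrative sketch (the names here come from no particular graphics package), a wireframe can be stored as a shared list of 3-D vertices plus triangles that refer to those vertices by index:

```python
# Minimal sketch of a triangle mesh: shared 3-D vertices, plus triangles
# given as index triples. This four-vertex tetrahedron ("pyramid") needs
# four triangles; a hand like the one illustrated needs hundreds or
# thousands of them.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.5, 1.0, 0.0),
    (0.5, 0.5, 1.0),
]
triangles = [
    (0, 1, 2),  # base
    (0, 1, 3),
    (1, 2, 3),
    (0, 2, 3),
]

# Each edge is shared by two triangles, so the wireframe has 6 unique edges.
edges = {tuple(sorted((t[i], t[(i + 1) % 3]))) for t in triangles for i in range(3)}
print(len(edges))  # 6
```

Sharing vertices this way is why adding polygons (862 to 3,444 in the hand example) grows the data less than quadrupling every coordinate would.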
Surface Textures
When we meet a surface in the real world, we can get information about it in two key ways. We can look at it, sometimes from several angles, and we can touch it to see whether it's hard or soft. In a 3-D graphic image, however, we can only look at the surface to get all the information possible. All that information breaks down into three areas:
Color: What color is it? Is it the same color all over?
Texture: Does it appear to be smooth, or does it have lines, bumps, craters or some other irregularity on the surface?
Reflectance: How much light does it reflect? Are reflections of other items in the surface sharp or fuzzy?
One way to make an image look "real" is to have a wide variety of these three features across the different parts of the image. Look around you now: Your computer keyboard has a different color/texture/reflectance than your desktop, which has a different color/texture/reflectance than your arm. For realistic color, it's important for the computer to be able to choose from millions of different colors for the pixels making up an image. Variety in texture comes both from mathematical models for surfaces ranging from frog skin to Jell-O gelatin, and from stored "texture maps" that are applied to surfaces. We also associate qualities that we can't see -- soft, hard, warm, cold -- with particular combinations of color, texture and reflectance. If one of them is wrong, the illusion of reality is shattered.
Adding a surface to the wireframe begins to change the image from something obviously mathematical to a picture we might recognize as a hand.
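The three surface attributes named above can be sketched as a simple record attached to each surface. The field names and values here are illustrative, not from any particular graphics API:

```python
# Sketch: a surface description carrying the three attributes the text
# names -- color, texture and reflectance. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Material:
    color: tuple        # base RGB, each channel 0.0-1.0
    texture: str        # name of a stored "texture map", e.g. "frog_skin"
    reflectance: float  # fraction of incoming light reflected, 0.0-1.0

skin = Material(color=(0.87, 0.68, 0.55), texture="hand_skin", reflectance=0.1)
print(skin.reflectance)
```

A renderer would consult a record like this for every surface it shades, which is why getting one attribute wrong on one object can break the illusion for the whole scene.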
Lighting
When you walk into a room, you turn on a light. You probably don't spend a lot of time thinking about the way the light comes from the bulb or tube and spreads around the room. But the people making 3-D graphics have to think about it, because all the surfaces surrounding the wireframes have to be lit from somewhere. One technique, called ray-tracing, plots the path that imaginary light rays take as they leave the bulb, bounce off of mirrors, walls and other reflecting surfaces, and finally land on items at different intensities from varying angles. It's complicated enough when you think about the rays from a single light bulb, but most rooms have multiple light sources -- several lamps, ceiling fixtures, windows, candles and so on.
Lighting plays a key role in two effects that give the appearance of weight and solidity to objects: shading and shadows. The first, shading, takes place when the light shining on an object is stronger on one side than on the other. This shading is what makes a ball look round, high cheekbones seem striking and the folds in a blanket appear deep and soft. These differences in light intensity work with shape to reinforce the illusion that an object has depth as well as height and width. The illusion of weight comes from the second effect -- shadows.
Lighting in an image not only adds depth to the object through shading, it “anchors” objects to the ground with shadows.
Solid bodies cast shadows when a light shines on them. You can see this when you observe the shadow that a sundial or a tree casts onto a sidewalk. And because we’re used to seeing real objects and people cast shadows, seeing the shadows in a 3-D image reinforces the illusion that we’re looking through a window into the real world, rather than at a screen of mathematically generated shapes.
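The shading effect described above is commonly modeled with Lambert's cosine law: brightness falls off with the angle between the surface normal and the direction toward the light. A minimal sketch, assuming both vectors are already unit length:

```python
# Sketch of diffuse shading: a surface facing the light is brighter than
# one angled away. Lambert's cosine law models this as the dot product of
# the unit surface normal and the unit direction toward the light,
# clamped at zero for surfaces facing away.
import math

def diffuse_intensity(normal, to_light):
    nx, ny, nz = normal
    lx, ly, lz = to_light
    dot = nx * lx + ny * ly + nz * lz
    return max(0.0, dot)  # assumes both vectors are unit length

print(diffuse_intensity((0, 0, 1), (0, 0, 1)))                       # facing the light: 1.0
print(diffuse_intensity((0, 0, 1), (0, math.sin(1), math.cos(1))))   # angled away: dimmer
print(diffuse_intensity((0, 0, 1), (0, 0, -1)))                      # facing away: 0.0
```

Applying this rule point by point over a curved surface produces exactly the gradual light-to-dark falloff that makes a ball look round.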
Perspective
Perspective is one of those words that sounds technical but that actually describes a simple effect everyone has seen. If you stand on the side of a long, straight road and look into the distance, it appears as if the two sides of the road come together in a point at the horizon. Also, if trees are standing next to the road, the trees farther away will look smaller than the trees close to you. As a matter of fact, the trees will look like they are converging on the point formed by the side of the road. When all of the objects in a scene look like they will eventually converge at a single point in the distance, that's perspective. There are variations, but most 3-D graphics use the "single point perspective" just described.
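Single point perspective reduces to a simple rule: divide a point's horizontal and vertical position by its distance from the viewer. A minimal sketch (the focal-length parameter is illustrative):

```python
# Sketch of single-point perspective: a point's screen position is its
# x and y divided by its distance z from the viewer (scaled by a focal
# length). As z grows, every point slides toward the centre of the
# screen -- the vanishing point.
def project(x, y, z, focal=1.0):
    return (focal * x / z, focal * y / z)

# A tree 5 units tall standing 2 units off the road, at increasing distances:
for z in (10, 20, 40):
    print(project(2.0, 5.0, z))
```

Both coordinates shrink together as z grows, which is why distant trees look smaller and appear to converge on the same point as the road's edges.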
[Illustration: hands at different distances, converging toward a vanishing point]
In the illustration, the hands are separate, but most scenes feature some items in front of, and partially blocking the view of, other items. For these scenes the software not only must calculate the relative sizes of the items but also must know which item is in front and how much of the other items it hides. The most common technique for calculating these factors is the Z-Buffer. The Z-buffer gets its name from the common label for the axis, or imaginary line, going from the screen back through the scene to the horizon. (There are two other common axes to consider: the x-axis, which measures the scene from side to side, and the y-axis, which measures the scene from top to bottom.)
In the real world, our eyes can’t see objects behind others, so we don’t have the problem of figuring out what we should be seeing. But the computer faces this problem constantly and solves it in a straightforward way. As each object is created, its Z-value is compared to that of other objects that occupy the same x- and y-values. The object with the lowest z-value is fully rendered, while objects with higher z-values aren’t rendered where they intersect. The result ensures that we don’t see background items appearing through the middle of characters in the foreground. Since the z-buffer is employed before objects are fully rendered, pieces of the scene that are hidden behind characters or objects don’t have to be rendered at all. This speeds up graphics performance.
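The comparison just described can be sketched for a single pixel grid. This is an illustrative toy, not any real renderer's code:

```python
# Sketch of a z-buffer: each candidate fragment (x, y, z, colour) is kept
# only if its z is lower (closer to the viewer) than what is already
# stored at that pixel.
WIDTH, HEIGHT, FAR = 4, 4, float("inf")
zbuffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
framebuffer = [["bg"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, colour):
    if z < zbuffer[y][x]:       # closer than anything drawn here so far
        zbuffer[y][x] = z
        framebuffer[y][x] = colour
    # else: hidden behind an earlier fragment, so never rendered at all

plot(1, 1, z=5.0, colour="tree")       # background object drawn first
plot(1, 1, z=2.0, colour="character")  # foreground character wins the pixel
plot(1, 1, z=9.0, colour="hill")       # even farther away: discarded
print(framebuffer[1][1])  # character
```

The discarded branch is where the performance saving comes from: hidden fragments are rejected before any expensive shading work is done on them.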
Depth of Field
Another optical effect successfully used to create 3-D is depth of field. Using our example of the trees beside the road, as that line of trees gets smaller, another interesting thing happens. If you look at the trees close to you, the trees farther away will appear to be out of focus. And this is especially true when you're looking at a photograph or movie of the trees. Film directors and computer animators use this depth of field effect for two purposes. The first is to reinforce the illusion of depth in the scene you're watching. It's certainly possible for the computer to make sure that every item in a scene, no matter how near or far it's supposed to be, is perfectly in focus. Since we're used to seeing the depth of field effect, though, having items in focus regardless of distance would seem foreign and would disturb the illusion of watching a scene in the real world.
The second reason directors use depth of field is to focus your attention on the items or actors they feel are most important. To direct your attention to the heroine of a movie, for example, a director might use a "shallow depth of field," where only the actor is in focus. A scene that's designed to impress you with the grandeur of nature, on the other hand, might use a "deep depth of field" to get as much as possible in focus and noticeable.
Anti-aliasing
A technique that also relies on fooling the eye is anti-aliasing. Digital graphics systems are very good at creating lines that go straight up and down the screen, or straight across. But when curves or diagonal lines show up (and they show up pretty often in the real world), the computer might produce lines that resemble stair steps instead of smooth flows. So to fool your eye into seeing a smooth curve or line, the computer can add graduated shades of the color in the line to the pixels surrounding the line. These "grayed-out" pixels will fool your eye into thinking that the jagged stair steps are gone. This process of adding additional colored pixels to fool the eye is called anti-aliasing, and it is one of the techniques that separate computer-generated 3-D graphics from those generated by hand. Keeping up with the lines as they move through fields of color, and adding the right amount of "anti-jaggy" color, is yet another complex task that a computer must handle as it creates 3-D animation on your computer monitor.
The jagged “stair steps” that occur when images are painted from pixels in straight lines mark an object as obviously computer-generated.
Drawing gray pixels around the lines of an image -- “blurring” the lines -- minimizes the stair steps and makes an object appear more realistic.
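One standard way to compute those in-between grey levels (a sketch of the idea, not necessarily the method any particular system uses) is supersampling: test several points inside each pixel and use the fraction covered by the shape as the shade.

```python
# Sketch of anti-aliasing by supersampling: each pixel is split into an
# n-by-n grid of sample points; the fraction of samples covered by the
# shape becomes a grey level, so pixels the edge only partly crosses
# come out part-way between black and white.
def coverage(px, py, inside, n=4):
    hits = 0
    for i in range(n):
        for j in range(n):
            # sample point at the centre of each sub-cell
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)

# A diagonal edge: everything below the line y = x is "inside" the shape.
below_diagonal = lambda x, y: y < x

print(coverage(5, 1, below_diagonal))  # pixel fully inside the shape: 1.0
print(coverage(1, 5, below_diagonal))  # pixel fully outside: 0.0
print(coverage(3, 3, below_diagonal))  # pixel the edge crosses: a grey in between
```

The partial coverages are exactly the "grayed-out" pixels the text describes, and computing them for every edge pixel in every frame is part of why anti-aliasing is costly.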

Realistic Examples

When all the tricks we’ve talked about so far are put together, scenes of tremendous realism can be created. And in recent games and films, computer-generated objects are combined with photographic backgrounds to further heighten the illusion. You can see the amazing results when you compare photographs and computer-generated scenes.



This is a photograph of a sidewalk near the How Stuff Works office. In one of the following images, a ball was placed on the sidewalk and photographed. In the other, an artist used a computer graphics program to create a ball.
image a
Image A
image b
Image B
Can you tell which is the real ball? Look for the answer at the end of the tutorial.

Making 3-D Graphics Move
So far, we've been looking at the sorts of things that make any digital image seem more realistic, whether the image is a single "still" picture or part of an animated sequence. But during an animated sequence, programmers and designers will use even more tricks to give the appearance of "live action" rather than of computer-generated images.
How many frames per second?
When you go to see a movie at the local theater, a sequence of images called frames runs in front of your eyes at a rate of 24 frames per second. Since your retina will retain an image for a bit longer than 1/24th of a second, most people's eyes will blend the frames into a single, continuous image of movement and action.
If you think of this from the other direction, it means that each frame of a motion picture is a photograph taken at an exposure of 1/24 of a second. That's much longer than the exposures taken for "stop action" photography, in which runners and other objects in motion seem frozen in flight. As a result, if you look at a single frame from a movie about racing, you see that some of the cars are "blurred" because they moved during the time that the camera shutter was open. This blurring of things that are moving fast is something that we're used to seeing, and it's part of what makes an image look real to us when we see it on a screen.

aeroplane

However, since digital 3-D images are not photographs at all, no blurring occurs when an object moves during a frame. To make images look more realistic, blurring has to be explicitly added by programmers. Some designers feel that "overcoming" this lack of natural blurring requires more than 30 frames per second, and have pushed their games to display 60 frames per second. While this allows each individual image to be rendered in great detail, and movements to be shown in smaller increments, it dramatically increases the number of frames that must be rendered for a given sequence of action. As an example, think of a chase that lasts six and one-half minutes. A motion picture would require 24 (frames per second) x 60 (seconds) x 6.5 (minutes) or 9,360 frames for the chase. A digital 3-D image at 60 frames per second would require 60 x 60 x 6.5, or 23,400 frames for the same length of time.
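The frame-count arithmetic above can be checked directly:

```python
# Frames required for a six-and-one-half-minute chase
seconds = 6.5 * 60                 # 390 seconds

film_frames = 24 * seconds         # motion picture at 24 frames per second
game_frames = 60 * seconds         # digital 3-D at 60 frames per second

print(int(film_frames))            # 9360
print(int(game_frames))            # 23400
```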
Creative Blurring
The blurring that programmers add to boost realism in a moving image is called "motion blur" or "temporal anti-aliasing." If you've ever turned on the "mouse trails" feature of Windows, you've used a very crude version of a portion of this technique. Copies of the moving object are left behind in its wake, with the copies growing ever less distinct and intense as the object moves farther away. The length of the trail, how quickly the copies fade away, and other details will vary depending on exactly how fast the object is supposed to be moving, how close to the viewer it is, and the extent to which it is the focus of attention. As you can see, there are a lot of decisions to be made and many details to be programmed in making an object appear to move realistically.
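The "mouse trails" idea can be illustrated with a few lines (positions, velocity, and the fade rule here are invented for the sketch): copies of the object are placed at earlier positions, each less opaque than the last.

```python
def motion_trail(position, velocity, copies):
    """Positions and opacities for fading copies behind a moving object."""
    trail = []
    for i in range(1, copies + 1):
        # each copy sits one step further back and is more faded
        trail.append((position - i * velocity, 1.0 - i / (copies + 1)))
    return trail

# an object at position 10 moving 2 units per frame, with 3 trail copies
print(motion_trail(10, 2, 3))   # [(8, 0.75), (6, 0.5), (4, 0.25)]
```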
There are other parts of an image where the precise rendering of a computer must be sacrificed for the sake of realism. This applies both to still and moving images. Reflections are a good example. You’ve seen the images of chrome-surfaced cars and spaceships perfectly reflecting everything in the scene. While the chrome-covered images are tremendous demonstrations of ray-tracing, most of us don’t live in chrome-plated worlds. Wooden furniture, marble floors and polished metal all reflect images, though not as perfectly as a smooth mirror. The reflections in these surfaces must be blurred -- with each surface receiving a different blur -- so that the surfaces surrounding the central players in a digital drama provide a realistic stage for the action.

Fluid Motion
All the factors we’ve discussed so far add complexity to the process of putting a 3-D image on the screen. It’s harder to define and create the object in the first place, and it’s harder to render it by generating all the pixels needed to display the image. The triangles and polygons of the wireframe, the texture of the surface, and the rays of light coming from various light sources and reflecting from multiple surfaces must all be calculated and assembled before the software begins to tell the computer how to paint the pixels on the screen. You might think that the hard work of computing would be over when the painting begins, but it’s at the painting, or rendering, level that the numbers begin to add up.
Today, a screen resolution of 1024 x 768 defines the lowest point of “high-resolution.” That means that there are 786,432 picture elements, or pixels, to be painted on the screen. If there are 32 bits of color available, multiplying by 32 shows that 25,165,824 bits have to be dealt with to make a single image. Moving at a rate of 60 frames per second demands that the computer handle 1,509,949,440 bits of information every second just to put the image onto the screen. And this is completely separate from the work the computer has to do to decide about the content, colors, shapes, lighting and everything else about the image so that the pixels put on the screen actually show the right image. When you think about all the processing that has to happen just to get the image painted, it’s easy to understand why graphics display boards are moving more and more of the graphics processing away from the computer’s central processing unit (CPU). The CPU needs all the help it can get.
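The arithmetic in this paragraph is easy to verify:

```python
pixels = 1024 * 768                     # 786,432 pixels per frame
bits_per_frame = pixels * 32            # 25,165,824 bits at 32-bit color
bits_per_second = bits_per_frame * 60   # 1,509,949,440 bits at 60 fps

assert pixels == 786_432
assert bits_per_frame == 25_165_824
assert bits_per_second == 1_509_949_440
```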
Back to the images of the ball. How did you do? Image A has a computer-generated ball. Image B shows a photograph of a real ball on the sidewalk. It’s not easy to tell which is which, is it?

Image Formats




Introduction


Graphic images can be used to enhance the look of Web pages as well as to provide content that supports the textual information on a page. When used judiciously, images can be attractive and informative; when used to excess, they can be distracting and bothersome. When choosing images to place on a page, you need to make sure they support the purposes of the page and do not detract from them. You also need to make sure that the file sizes of images, which can become quite large, do not cause unnecessarily long download times.

GIF Image Format

GIF (Graphics Interchange Format) is the most widely supported graphics format. Pictures saved in this format have the .gif file extension. GIF format can display images in black and white, grayscale, or color. When used for color pictures, the GIF format is limited to displaying up to 256 colors. Normally, when a graphics program saves an image in GIF format, the software uses the (up to) 256 colors that best represent the colors in the picture. Because of the compression technique used for GIF images, this format is best for pictures with spot colors rather than continuous colors. In other words, this is a good format for line drawings, logos, icons, text, and other images with discontinuous colors; it is not the best format for photographic images.

One of the concerns about using graphics on a Web page is the size of the files produced by the format. Larger files mean longer download times and a longer wait before the visitor can view the page. In general, it is not the dimensions of a GIF image that affect the size of the file but the number of different colors in the image. Therefore, the most effective way of reducing file sizes, and thereby download times, is to reduce the number of colors in the image.

Most graphics programs permit choices of the number of colors saved with an image. Figure 5-1 shows two save options for GIF files permitted by Adobe Photoshop. The default option on the left uses the full complement of 256 colors; the color palette for the image is shown in the bottom-right of the picture. It produces a file size of approximately 10.9 KB. On the right is the same image formatted with 16 colors. There are no noticeable differences between the two pictures; the one on the right, however, produces a file size of only 3.5 KB. If you are creating your own images, you should explore techniques that make your file sizes as small as possible without distorting the image or misrepresenting its colors.
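Reducing the number of colors amounts to mapping every pixel onto a small palette. A toy sketch of that mapping (the palette and pixel values here are invented; real graphics programs choose the palette adaptively from the image itself):

```python
def quantize(pixel, palette):
    """Map an (r, g, b) pixel to the nearest color in a small palette."""
    def dist(a, b):
        # squared distance between two colors in RGB space
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda c: dist(pixel, c))

# a hypothetical 4-color palette for a logo with flat "spot" colors
palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]
reduced = [quantize(p, palette) for p in [(250, 5, 5), (10, 10, 10)]]
print(reduced)   # [(255, 0, 0), (0, 0, 0)]
```

With fewer distinct colors in the image, the GIF compressor has far less data to record, which is why the file shrinks.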

Given below are some GIF images:

GIF 64

GIF 32

GIF 256

Transparent Images

One version of the GIF format -- GIF89a -- has the capability of producing transparent images. You can specify one particular color in the image which is then rendered transparent when the image is displayed on the page. Most often this is the single background color in the picture. When set to transparent, the background disappears, leaving only the foreground image displayed against the page background. The following figure shows the transparent effect.

Both the standard and transparent images are created on a white background and are saved as GIF89a files. The "Transparent" image, however, has the color white selected as the transparent color. When the images are displayed on a textured background, the white background of the transparent image is rendered transparent to permit the page background to show through. Of course, if the background color of the page is the same as the background color of the image, then there is no need to make the picture background transparent.

Interlaced Images

Another feature of GIF89a format is its ability to produce interlaced images. Normally, when an image is loaded into the browser it is revealed a few lines at a time beginning at the top of the picture. If the file size is large and the connection speed is slow you see the picture revealed a little at a time until the complete picture is downloaded.

If you choose to save your images as interlaced, the entire picture is revealed at increasingly higher resolutions. That is, it is first revealed as a low-resolution version of the entire picture; as more of the picture is downloaded, it becomes sharper and sharper as details are added. Although the time taken to download an interlaced image is the same as for a regular image, it often appears to download faster since a complete, although not final, image is viewable much more quickly. Whether you use standard or interlaced images is more a matter of personal preference than technical need.
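The GIF89a format achieves this by storing rows in four passes rather than strictly top to bottom, so the browser can show a coarse version of the whole picture early. A sketch of the row order (the pass layout follows the GIF interlace scheme; the function name is illustrative):

```python
def gif_interlace_order(height):
    """Row order for the GIF 4-pass interlace scheme."""
    passes = [(0, 8),   # pass 1: every 8th row starting at row 0
              (4, 8),   # pass 2: every 8th row starting at row 4
              (2, 4),   # pass 3: every 4th row starting at row 2
              (1, 2)]   # pass 4: every remaining odd row
    order = []
    for start, step in passes:
        order.extend(range(start, height, step))
    return order

print(gif_interlace_order(8))   # [0, 4, 2, 6, 1, 3, 5, 7]
```

After the first two passes, a quarter of the rows are on screen, spread evenly down the image, which is why an interlaced picture looks like a blurry but complete preview.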

Animated Images

Multiple single images can be packaged together to produce animated GIF images. These are image files containing two or more images that are revealed in a timed sequence.


A slightly different animation technique is used in other animation programs. Some GIF animators require the creation of separately saved GIF images. Each image represents a different frame of the animation. These individual GIF files are imported into the software and are converted into sequenced cells of the animation.

An animated GIF file is retrieved by the browser just like any other GIF file. When displayed in the browser the file produces the animation. Of course, if you are not particularly skilled in working with graphics or you do not have the patience to put together the series of pictures to be animated you can probably find animated GIF images on the Web that suit your needs.

PNG Image Format

A newer format that is increasing in popularity is PNG (Portable Network Graphics), pronounced "ping." This format is used for the same purposes as GIF; however, it produces smaller file sizes and faster loading times without loss of resolution quality. It also supports interlaced images and displays them faster than the GIF format.


PNG8


PNG 24

The most noticeable difference between PNG and GIF formats occurs with transparent images. Whereas GIF transparency is all-or-nothing for each pixel, PNG format supports an alpha channel with up to 256 levels of partial transparency, allowing images to blend better with the background color or pattern of a page.

JPEG Image Format

The JPEG (Joint Photographic Experts Group) format is designed for storing photographic images with millions of colors at different compression rates. During compression, graphics programs use special algorithms to sample and render colors close to those in the original picture but without retaining full color information in order to minimize file sizes.

You normally have a choice of compression settings when saving pictures in JPEG format. Smaller file sizes normally mean greater loss of picture detail. Still, with moderate compression you can display an image on screen that appears very similar in quality to the original picture. The four pictures below show the original image and three compressions along with the resulting file sizes. You can see a loss of sharpness in the images with higher compression and smaller file sizes.

JPEG images at various compressions

For Web images that are displayed at normal 72 pixels per inch, compression percentages that reduce file sizes to as small as 1/8 to 1/4 of original file sizes still retain satisfactory visual precision.

JPEG images are saved as files with the .jpg extension. JPEG format does not support transparency, and interlaced display is available only through the progressive JPEG variant; in addition, it is not a good format for text or line drawings, since the hard edges and straight lines they require tend to blur under compression.


JPEG image with High Compression[Low Quality]


JPEG image with Low Compression [High Quality]

Difference between GIF, PNG and JPEG

The GIF Format

The GIF format is one of the most popular formats on the Internet. Not only is the format excellent at compressing images with large areas of the same color, but it is also the only option for putting animation online (unless you want to use Flash or other vector-based animation formats, which typically cost more). The GIF89a format also supports transparency and interlacing.

GIF files support a maximum of 256 colors, which makes them practical for almost all graphics except photographs. The most common method of reducing the size of GIF files is to reduce the number of colors on the palette. It is important to note that GIF already uses the LZW compression scheme internally to make images as small as possible without losing any data.
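The LZW idea can be sketched in a few lines: the compressor learns a dictionary of strings it has already seen and emits short codes in place of repeats, which is why images with large flat runs of the same color compress so well. This is a bare-bones illustration, not a full GIF encoder:

```python
def lzw_compress(data):
    """Minimal LZW sketch: emit dictionary codes instead of repeated strings."""
    table = {chr(i): i for i in range(256)}   # start with single characters
    current, codes = "", []
    for ch in data:
        if current + ch in table:
            current += ch                      # keep extending a known string
        else:
            codes.append(table[current])       # emit the code for what we have
            table[current + ch] = len(table)   # learn the new, longer string
            current = ch
    if current:
        codes.append(table[current])
    return codes

print(lzw_compress("ABABAB"))   # [65, 66, 256, 256]
```

Six input characters become four output codes; on a long run of identical pixels the savings grow rapidly, and no information is lost.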

Transparency

As mentioned above, the GIF format supports transparency, which allows a graphic designer to designate the background of an image as transparent. This means that if you place a transparent GIF in a yellow table cell, the yellow background will show through the transparent areas of the image.

Interlacing

The interlacing feature in a GIF file creates the illusion of faster loading graphics. What happens is that an image is presented in a browser in several steps. At first it will be fuzzy and blurry, but as more information is downloaded from the server, the image becomes more and more defined until the entire image has been downloaded. It's important to note that interlaced GIF files will usually be a bit larger than non-interlaced ones, so use interlacing only when it makes sense.

When to use them

Generally, GIF files should be used for logos, line drawings and icons. Avoid using the format for photographic images and for graphics with long stretches of continuous tone. When you're designing GIF files, avoid gradients and turn off anti-aliasing where possible to minimize the file size.

The JPEG Format

The JPEG format, with its support for 16.7 million colors, is primarily intended for photographic images. The internal compression algorithm of the JPEG format, unlike that of the GIF format, actually throws out information. Depending on what settings you use, the discarded data may or may not be visible to the eye. Once you lower the quality of an image and save it, the discarded data cannot be regained, so be sure to keep a copy of the original.

Progressive JPEGs

Any JPEG file can be saved as a Progressive JPEG. This is very similar to the interlaced GIF. As with GIF, this presents a low-quality image to your visitor at first, and over several passes improves the quality of it. Some graphic editing tools allow you to specify the number of passes before the image downloads completely.

When to use

As a rule, the JPEG format should be used on photographic images, and images which do not look as good with only 256 colors.

The PNG format

The third, and newest, file format that's widely supported by the Web is PNG (pronounced "ping"). PNG was developed to surpass the limitations of GIF, and as a means by which developers can avoid the patent licenses associated with other formats. PNG was designed to offer the main features of the GIF format, including streaming and progressive display. It also provides greater depth of color, supporting images with up to 24-bit color.

It's expected that support for PNG will be widespread in the near future, although it will never completely replace GIF, as it doesn't support animation.

Inside Modem



Introduction to Modem
Modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information.
The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio.
img
The most familiar example is a voiceband modem that turns the digital 1s and 0s of a personal computer into sounds that can be transmitted over the telephone lines of the Plain Old Telephone Service (POTS), and, once received on the other side, converts those 1s and 0s back into a form used by a USB, serial, or network connection.
Modems are generally classified by the amount of data they can send in a given time, normally measured in bits per second, or "bps". They can also be classified by Baud, the number of distinct symbols transmitted per second; these numbers are directly connected, but not necessarily in linear fashion (as discussed under Baud.)
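The relationship between baud and bps can be stated precisely: each symbol can carry several bits, so bps equals the baud rate times the base-2 logarithm of the number of distinct symbols. The figures below are illustrative, loosely modeled on V.32-class modems:

```python
import math

def bits_per_second(baud, symbols):
    """bps = baud x bits carried per symbol (log2 of the symbol count)."""
    return int(baud * math.log2(symbols))

# a modem signalling at 2400 baud with 16 distinct symbols (4 bits each)
print(bits_per_second(2400, 16))   # 9600
```

This is why the two numbers are directly connected but not identical: doubling the symbol alphabet adds only one bit per symbol.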
A modem (a modulator/demodulator) lets you connect your computer to a standard telephone line so you can transmit and receive electronically transmitted data. It is the key that unlocks the world of the Internet and its World Wide Web, commercial online services, electronic mail (E-mail), and bulletin board systems (BBSes).

Types of Modems
Depending upon how your computer is configured and your preferences, you can have an external, internal or PC modem card. All three types work the same way, but each has its advantages and disadvantages.
External modem
This is the simplest type of modem to install because you don't have to open the computer. External modems have their own power supply and connect with a cable to a computer's serial port. The telephone line plugs into a socket on the rear panel of the modem.
Because external modems have their own power supply, you can turn off the modem to break an online connection quickly without powering down the computer.
Another advantage over an internal modem is that an external modem's separate power supply does not drain any power from the computer. You also can monitor your modem's connection activity by watching the status lights.
img
Internal modem
Most internal modems come installed in the computer you buy. Internal modems are more directly integrated into the computer system and, therefore, do not need any special attention. Internal modems are activated when you run a communications program and are turned off when you exit the program. This convenience is especially useful for novice users.
Internal modems usually cost less than external modems, but the price difference is usually small. The major disadvantage with internal modems is their location: inside the computer. When you want to replace an internal modem you have to go inside the computer case to make the switch.
img
PC Card modem
These modems, designed for portable computers, are the size of a credit card and fit into the PC Card slot on notebook and handheld computers. These modems are removed when the modem is not needed. Except for their size, PC Card modems are like a combination of external and internal modems.
These devices are plugged directly into an external slot in the portable computer, so no cable is required other than the telephone line connection. The cards are powered by the computer, which is fine unless the computer is battery-operated. Running a PC Card modem while the portable computer is operating on battery power drastically decreases the life of your batteries.
img

How a Modem Works
When a modem first makes a connection, you will hear screeching sounds coming from the modem. These are the digital signals from the computer to which you are connecting being modulated into audible sounds. The modem sends a higher-pitched tone to represent the digit 1 and a lower-pitched tone to represent the digit 0.
At the other end of your modem connection, the computer attached to its modem reverses this process. The receiving modem demodulates the various tones into digital signals and sends them to the receiving computer.
Actually, the process is a bit more complicated than sending and receiving signals in one direction and then another. Modems simultaneously send and receive signals in small chunks. The modems can tell incoming from outgoing data signals by the type of standard tones they use.
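The tone-per-bit scheme described above is frequency-shift keying. A minimal sketch follows; the sample rate is an assumption, and the two tone frequencies are loosely based on the Bell 103 standard's mark and space tones:

```python
import math

MARK, SPACE = 1270.0, 1070.0   # Hz: higher tone for 1, lower tone for 0
RATE = 8000                    # samples per second (an assumed sample rate)

def modulate(bits, samples_per_bit=20):
    """Turn a list of 1s and 0s into sine-wave samples (an FSK sketch)."""
    samples = []
    for i, bit in enumerate(bits):
        freq = MARK if bit else SPACE
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

wave = modulate([1, 0, 1])     # 60 samples: tone, lower tone, tone again
```

The receiving modem does the reverse, measuring which frequency is present in each bit period to recover the 1s and 0s.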
Another part of the translation process involves transmission integrity. The modems exchange an added mathematical code along the way. This special code, called a checksum, lets both computers know if the data segments are coming through properly.
If the mathematical sums do not match, the modems communicate with each other and resend the missing segments of data. Modems also have special circuitry that allows them to compress digital signals before modulating them and then decompress the signals after demodulating them. The compression/decompression process compacts the data so that it can travel along telephone lines more efficiently.
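A sum-style check can be illustrated in a couple of lines. This simple byte sum is for illustration only; real modem protocols use more robust checks such as CRCs:

```python
def checksum(data):
    """Sum-of-bytes checksum modulo 256 (an illustrative stand-in)."""
    return sum(data) % 256

segment = b"hello"
received = b"hellp"                            # one byte corrupted in transit
ok = checksum(segment) == checksum(received)   # mismatch: request a resend
print(ok)   # False
```

Both ends compute the same sum over a data segment; a mismatch tells the receiving modem that the segment must be sent again.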
Modems convert analog data transmitted over phone lines into digital data computers can read; they also convert digital data into analog data so it can be transmitted. This process involves modulating and demodulating the computer’s digital signals into analog signals that travel over the telephone lines.
In other words, the modem translates computer data into the language used by telephones and then reverses the process to translate the responding data back into computer language.
What is the difference between digital and analog signals?
A computer performs its tasks by turning on and off a series of electronic switches represented by the numerical digits of 0 and 1. A 0 is the code for off, and a 1 is the code for on. Combinations of these digital codes represent text, computer commands, and graphics inside the computer. By comparison, the telephone works by sending sounds in a continuous analog signal sent along an electronic current that varies in frequency and strength.

Internet Radio




Introduction to Internet Radio

A college student in Delhi listens to a disc jockey in New York play the latest rapso (calypso rap) music. A children’s advocacy group unites its geographically diverse members via private broadcast. A radio listener hears an ad for a computer printer and places an order immediately using the same medium on which he heard the ad. All of this is possible with Internet radio, the biggest technological innovation in radio broadcasting since the business began in the early 1920s.

Internet Radio
Internet radio has been around since the late 1990s. Traditional radio broadcasters have used the Internet to simulcast their programming. But Internet radio is undergoing a revolution that will expand its reach beyond the desktop computer, making broadcasts accessible anywhere, anytime, and expand its programming from traditional broadcasters to individuals, organizations and government.

Freedom of the Airwaves
Radio broadcasting began in the early ‘20s, but it wasn’t until the introduction of the transistor radio in 1954 that radio became available in mobile situations. Internet radio is in much the same place. Until the 21st century, the only way to obtain radio broadcasts over the Internet was through your PC. That will soon change, as wireless connectivity will feed Internet broadcasts to car radios, PDAs and cell phones. The next generation of wireless devices will greatly expand the reach and convenience of Internet radio.
Uses and Advantages
Traditional radio station broadcasts are limited by two factors:
• the power of the station’s transmitter (typically 100 miles)
• the available broadcast spectrum (you might get a couple of dozen radio stations locally)
Internet radio has no geographic limitations, so a broadcaster in Kuala Lumpur can be heard in Kansas on the Internet. The potential for Internet radio is as vast as cyberspace itself (for example, Live365 offers more than 30,000 Internet radio broadcasts).
In comparison to traditional radio, Internet radio is not limited to audio. An Internet radio broadcast can be accompanied by photos or graphics, text and links, as well as interactivity, such as message boards and chat rooms. This advancement allows a listener to do more than listen. In the example at the beginning of this article, a listener who hears an ad for a computer printer ordered that printer through a link on the Internet radio broadcast Web site. The relationship between advertisers and consumers becomes more interactive and intimate on Internet radio broadcasts. This expanded media capability could also be used in other ways. For example, with Internet radio, you could conduct training or education and provide links to documents and payment options. You could also have interactivity with the trainer or educator and other information on the Internet radio broadcast site.
Internet radio programming offers a wide spectrum of broadcast genres, particularly in music. Broadcast radio is increasingly controlled by smaller numbers of media conglomerates (such as Cox, Jefferson-Pilot and Bonneville). In some ways, this has led to more mainstreaming of the programming on broadcast radio, as stations often try to reach the largest possible audience in order to charge the highest possible rates to advertisers. Internet radio, on the other hand, offers the opportunity to expand the types of available programming. The cost of getting “on the air” is less for an Internet broadcaster, and Internet radio can appeal to “micro-communities” of listeners focused on special music or interests.

Creating an Internet Radio Station
What do you need to set up an Internet radio station?

• CD player
• Ripper software (copies audio tracks from a CD onto a computer’s hard drive)
• Assorted recording and editing software
• Microphones
• Audio mixer
• Outboard audio gear (equalizer, compressor, etc.)
• Digital audio card
• Dedicated computer with encoder software
• Streaming media server

Getting audio over the Internet is pretty simple:
1. The audio enters the Internet broadcaster’s encoding computer through a sound card.
2. The encoder system translates the audio from the sound card into streaming format. The encoder samples the incoming audio and compresses the information so it can be sent over the Internet.
3. The compressed audio is sent to the server, which has a high bandwidth connection to the Internet.
4. The server sends the audio data stream over the Internet to the player software or plug-in on the listener’s computer. The plug-in receives the audio data stream from the server and translates it into the sound heard by the listener.
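The four steps above can be sketched as a tiny pipeline, with zlib standing in for a real streaming audio codec (all function names and the chunk data are illustrative):

```python
import zlib

def encode(audio_chunks):
    """Encoder: compress each chunk of sampled audio (steps 1-2)."""
    for chunk in audio_chunks:
        yield zlib.compress(chunk)

def play(stream):
    """Player: decompress each arriving chunk back into audio (step 4)."""
    return [zlib.decompress(packet) for packet in stream]

raw = [b"chunk-one", b"chunk-two"]   # stand-ins for sampled audio data
heard = play(encode(raw))            # step 3 (the server) is just a pipe here
```

In a real deployment the server sits between the two generators, relaying the compressed stream over the network; here the encoder feeds the player directly to show the round trip.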
There are two ways to deliver audio over the Internet: downloads or streaming media. In downloads, an audio file is stored on the user’s computer. Compressed formats like MP3 are the most popular form of audio downloads, but any type of audio file can be delivered through a Web or FTP site. Streaming audio is not stored, but only played. It is a continuous broadcast that works through three software packages: the encoder, the server and the player. The encoder converts audio content into a streaming format, the server makes it available over the Internet and the player retrieves the content. For a live broadcast, the encoder and streamer work together in real-time. An audio feed runs to the sound card of a computer running the encoder software at the broadcast location and the stream is uploaded to the streaming server. Since that requires a large amount of computing resources, the streaming server must be a dedicated server.