From: BGB / cr88192 on

"Daniel Pitts" <newsgroup.spamfilter(a)virtualinfinity.net> wrote in message
news:omDVm.43117$kY2.43032(a)newsfe01.iad...
> BGB / cr88192 wrote:
>> "Daniel Pitts" <newsgroup.spamfilter(a)virtualinfinity.net> wrote in
>> message news:yeuVm.50745$cX4.33730(a)newsfe10.iad...
>>> This is almost an English question, rather than a programming question,
>>> because I'm searching for the suitable name of a class.
>>>
>>> I'm creating some value types for a simulation that I'm working on.
>>>
>>> In the past, I've used a vector as a representation of a point, but I've
>>> realized that they have subtle differences in properties. A vector
>>> represents a delta, or a change, where a Point represents a fixed
>>> location.
>>>
>>
>> beyond this, personally I don't really see the point of distinguishing
>> them.
>> I once asked a teacher this, as he was making the distinction, and he
>> went on and never really gave a solid answer. "it is because it is I
>> guess"...
> The reason is semantics. Why distinguish a complex number from a 2d
> vector? You can certainly create a single class which will handle both
> cases (there is even some overlap). But you don't want your API clients
> passing in a vector when you expected a complex number.

except that a complex number has different operations, and a significantly
different meaning, from a 2D vector...

this is not really the case with vectors vs. points, which is, IMO, more of
a "splitting hairs" distinction...


>>
>> a point is a location, as contrasted from a vector relative to the
>> origin?...
> A point *has* a location. A vector has a magnitude and direction. Although
> both points and vectors are representable in the same coordinate spaces,
> the semantics of those representations are very different.

what is a point? (x, y)
what is a vector? <x, y>

magnitude and direction can be calculated from a vector, but they are not,
in effect, the vector (as I see it).

IMO, this would be similar to claiming that <p, q, r> (in spherical coords)
is also a vector, since it can also be said to have direction and magnitude,
but spherical coords are a different entity from a vector (although, they
are convertible, which was a big part of my trick of shoving a quat into the
form of 3 angles...).


>>
>> I don't know, as far as the math or behavior goes there is not a whole
>> lot of difference.
> Not in the math, but in the meaning.
>

what is "meaning" apart from structure and behavior?...


>>
>>> From this realization, I've determined:
>>>
>>> 1. A vector can be multiplied or divided by a scalar; A point can not.
>>> 2. A vector has a unit value; A points does not.
>>> 3. Two vectors can be added or subtracted. Two points can be only
>>> subtracted (which results in a vector)
>>> 4. A vector can be added to and subtracted from a point, resulting in
>>> a point.
>>>
>>> Among other things.
>>>
>>
>> there are lots of operations which apply...
>> there are, infact, more operations than one may care to think about
>> (unlike, say, real numbers, vectors would seem to be a lot more "open
>> ended" in these regards).
>>
>> your example list doesn't even include dot and cross product, FWIW, ...
> If I'd wanted to make an exclusive list, it wouldn't have included "Among
> other things."

ok, only that these are fairly basic operations...

"among other things" would presumably be more in reference to things like
the wedge product, line/plane intersections, projection, ...
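
anyway, just to make the 1-4 list above concrete for anyone following along,
here is a rough sketch of what such a split could look like (my own
throwaway take, not Daniel's actual classes, and 2D since that is the stated
space); the point of the split is that the compiler will then reject things
like adding two points:

/* sketch of the point/vector split from the list above;
   names and layout are placeholders, not the OP's design. */
#include <math.h>

typedef struct { double dx, dy; } Vector2;   /* a delta / displacement */
typedef struct { double x,  y;  } Point2;    /* a fixed location */

/* 1. a vector can be scaled; no such operation exists for a point */
static Vector2 v2scale(Vector2 v, double s)
{ Vector2 r = { v.dx * s, v.dy * s }; return r; }

/* 2. a vector has a magnitude and a unit form */
static double  v2len (Vector2 v) { return sqrt(v.dx * v.dx + v.dy * v.dy); }
static Vector2 v2unit(Vector2 v) { return v2scale(v, 1.0 / v2len(v)); }

/* 3. vector +/- vector -> vector; point - point -> vector */
static Vector2 v2add(Vector2 a, Vector2 b)
{ Vector2 r = { a.dx + b.dx, a.dy + b.dy }; return r; }
static Vector2 p2sub(Point2 a, Point2 b)
{ Vector2 r = { a.x - b.x, a.y - b.y }; return r; }

/* 4. point + vector -> point */
static Point2 p2addv(Point2 p, Vector2 v)
{ Point2 r = { p.x + v.dx, p.y + v.dy }; return r; }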


>>
>>
>>> So, for my implementation, vectors are implemented in terms of a
>>> rectangular coordinate and points are implemented in terms of a vector.
>>> The vector value of a point is the delta from the origin.
>>>
>>
>> personally I see vectors as being in a rectangular coordinate system as
>> more a property of the coordinate space than of the vectors,
> True, but the vectors have to be implemented some how, and choosing
> rectangular coordinates simplify a lot of the common operations (addition,
> dot product, etc...)

yes, so the operations have a "sane" meaning regardless of space...

then one doesn't have to deal with the mental arbitrariness, like, for
example, that a vector in a toroidal space can be both a straight line and
a helix at the same time: from one point of view we have a straight line,
and from another, a helix...

never mind that with a slight bit of trickery one can pass off a toroid as a
sphere, and very possibly no one will notice...

no one needs to figure out that this square wrapping RPG-style world map
couldn't possibly be on a sphere...


>> since the operations/... remain themselves mostly unchanged even though
>> the "space" is different (although I guess one could argue that a cross
>> product in a right-hand space is different from a left-hand space,
>> depending on ones' interpretations and implementation).
> Technically, you end up with an object which is not a vector, I forget the
> name of it, but it is basically a double-sided vector. a union of both
> left and right handed. Much like square root radicals, its convention
> from which we choose ignore one of the results.

yeah.


>>
>> the problem though with the cross-product, is that its behavior in a
>> left-hand space would be indeterminate apart from defining (either
>> implicitly or explicitly) a plane along which the space is mirrored (I
>> would have to verify, but it seems intuitively that there would be
>> multiple possible left-hand cross-products WRT a given right-hand cross
>> product depending on the spatial projection).
> Kind of diverged there from the topic :-) Anyway, I'm dealing with 2d
> space, so cross-product is not a concern.

yes, ok, reasonable enough...


>>
>>
>>
>>> Now, I have also figured out that there is the same difference between
>>> an "Angle" and a "____", but I don't know what to call "____" (and I'm
>>> not sure which one is the vector analogue vs the point analogue).
>>>
>>
>> this is only true if the angle is assumed to be an absolute rotation.
>>
>> more so, you don't state if you are talking here about 2D or 3D, where
>> rotation in 3D is a different beast from 2D (and in turn, 4D is a
>> different beast from 3D, challenging what one may think of
>> "rotation"...).

> Ah, I did forget to mention I was working in a 2d space.

yep...

well, this does simplify a few things, FWIW...


>>
>> one can instead think that an angle is essentially like a very limited
>> delta rotation, rather than an absolute rotation (though, an angle can
>> fill this role in 2D).
>>
>> granted, in 3D, people often end up using euler angles (ZXZ and ZXY being
>> common, or informally, as "yaw, pitch, roll"), although, personally, I
>> have found that I rather dislike euler angles much beyond the "simple"
>> case, and instead prefer nearly any other option.
>>
>> one idea I had made use of recently was to embed an axis-angle into the
>> form of 3 angles (sort of like euler angles), but which allows a
>> 1:1 conversion between this and unit quaternions. more so, in the simple
>> case, it can identity map to a yaw angle.
>>
>> the idea:
>> Quat <-> Axis-Angle, and Axis-Angle <-> 3Rot, where 3Rot encodes the axis
>> in the pitch and roll fields, and the angle in yaw (pitch=roll=0 then
>> corresponds to the Z axis, allowing for ordinary rotation in this case).
>>
>> the main reason to do something like this was to "retrofit"
>> quaternion-based rotations onto a mass of code designed for the use of
>> euler angles (in particular, the Quake2 engine, where a flag indicates
>> which system is in use, but I had by mistake called this flag ZXZ, later
>> realizing that this system was not, in fact, ZXZ...).
> Off topic again are ya? ;-)

well, all this is plenty relevant for 3D, and at that point it hadn't yet
been said that the topic was not 3D...
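
for reference, the quat <-> axis-angle half of this is just the standard
conversion (the 3Rot packing is the only unusual part); roughly along these
lines, as a sketch rather than the actual engine code:

/* standard unit-quaternion <-> axis-angle conversion; sketch only. */
#include <math.h>

typedef struct { double x, y, z, w; } Quat;        /* w = scalar part */
typedef struct { double x, y, z, ang; } AxisAngle; /* axis assumed unit length */

static Quat quat_from_axis_angle(AxisAngle aa)
{
    double s = sin(aa.ang * 0.5);
    Quat q = { aa.x * s, aa.y * s, aa.z * s, cos(aa.ang * 0.5) };
    return q;
}

static AxisAngle axis_angle_from_quat(Quat q)
{
    AxisAngle aa;
    double s = sqrt(1.0 - q.w * q.w);   /* = sin(ang/2) for a unit quat */
    aa.ang = 2.0 * acos(q.w);
    if (s < 1e-9) {                     /* angle ~ 0: axis arbitrary, pick Z */
        aa.x = 0; aa.y = 0; aa.z = 1;
    } else {
        aa.x = q.x / s; aa.y = q.y / s; aa.z = q.z / s;
    }
    return aa;
}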


>>
>>
>>> I'm thinking that maybe angles are the fixed (point analague) value. The
>>> vector analogue, eg the delta, might be called "rotation", but I'm not
>>> sure. An angle would be implemented in terms of a "rotation" from some
>>> determined origin (east for example), and rotation will be defined as a
>>> scalar value (probably using radian units).
>>>
>>> Does this all make sense, or am I too sleep deprived :-)
>>>
>>
>> this seems counter-intuitive...
>>
>> but, normally angles are not relative to some particular direction, but
>> along some particular axis (such as Z). they can also be regarded as a
>> relation between 2 coordinate systems along said axis, where usually an
>> angle of '0' would mean a 1:1 mapping between these systems.
> In 2d spaces, absolute angles are relative (by convention) to the positive
> x axis. Although, this is something which I will be abstracting away, so
> the convention need not apply. A user can start with axial unit vector,
> and then rotate clockwise or counter-clockwise by whatever amount.

ok, or in a 2D space, one could just as easily assert that the angle is
along an invisible axis "upwards".

I think this is sort of like how quats work in 3D:
the axis exists in 3D, but the rotation itself falls outside 3D space.

any further and one finds oneself rotating along planes...

working backwards, we could assert, once again, that in fact the 2D space is
rotating (absent reference to a magic Z axis).

hmm...
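
fwiw, if one did want to mirror the point/vector split for angles in 2D, a
rough sketch of what it might look like (names are placeholders, my own and
not the OP's: a Heading is the absolute, point-like value; a Rotation is the
delta, vector-like value):

/* sketch only: absolute heading vs. delta rotation in 2D */
typedef struct { double rad; } Heading;    /* absolute, e.g. from the +X axis */
typedef struct { double rad; } Rotation;   /* a signed delta, in radians */

#define TWO_PI 6.283185307179586

static Rotation rot_scale(Rotation r, double s)       /* delta * scalar */
{ Rotation o = { r.rad * s }; return o; }

static Rotation rot_add(Rotation a, Rotation b)       /* delta + delta */
{ Rotation o = { a.rad + b.rad }; return o; }

static Heading heading_turn(Heading h, Rotation r)    /* absolute + delta */
{
    double a = h.rad + r.rad;
    while (a >= TWO_PI) a -= TWO_PI;
    while (a < 0.0)     a += TWO_PI;
    {
        Heading o = { a };
        return o;
    }
}

static Rotation heading_diff(Heading a, Heading b)    /* absolute - absolute */
{ Rotation o = { a.rad - b.rad }; return o; }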


>> which way is "east" or "forwards" is then a secondary issue, although,
>> normally the orientation of: X=right, Y=forwards, Z=up is common.
>>
>> this is also apparently a difference between Quake1 and Quake2, where in
>> Quake1 (and in the non-renderer code in Quake2), this is followed, but
>> within the renderer Y=right and -X=forwards...
>>
>> nevermind that Q2 can't keep it consistent which way angles go (leading
>> to frequent angle flipping in the renderer), or even whether clockwise or
>> counter-clockwise is the front-facing vertex order (so, the character
>> models use CCW vertex ordering and the BSP models use CW ordering, ...).
>>
>>
>> all this stuff made it a pain to add on new features (mostly for my own
>> learning experience), such as real-time stencil lighting and shadows
>> (as-in Doom3), which required lots of hacking and fiddling (and still
>> looks not very good as the Quake2 maps are designed for radiosity, and
>> hence are not well lit and have low-saturation textures, whereas the
>> Quake1 maps look much better, but tend to be bright and over-saturated
>> with Quake2's lightmapping...).
>> similar goes for rigid body physics, which turned out disappointingly
>> not-very-good...
> You really like going off on tangents, don't you :-) I actually don't
> mind, its always interesting to me to hear more about these kinds of
> things, even if they aren't relevant to the discussion at hand

the topic is a terribly narrow track...


>>
>>
>>> So, what should my class names be? I'm thinking of calling them Vector,
>>> Point, Rotation, Angle, but Rotation seems to not quit fit. FixedAngle
>>> vs RelativeAngle is a bit too wordy, and there is probably a more
>>> concise and exact concept that I'm missing.
>>>
>>
>> dunno really...
>>
>>
>> I have never actually really done any of this via classes.
>> I have older code, which mostly used float pointers and functions.
>>
>> some of my newer code uses a vector system based on top of compiler SIMD
>> intrinsics, which uses an API designed partly based on GLSL.
>>
>>
>> vec3 u, v, w;
>>
>> u=vec3(1, 0, 0);
>> v=vec3(0, 1, 0);
>> w=v3cross(u, v);
> That's not very different from using classes, just the syntax changes
> slightly.
>

yes, ok...

it includes these types:
vec2, vec3, vec4, quat

mat3/mat4 have been partly implemented, but were not really finished (not as
much motivation since matrices don't map nicely to SIMD intrinsics, unlike,
say, vectors or quats...).


note that these types were not wrapped in structs, mostly as I was figuring
that the compiler was a bit stupid (the struct wrapper might cause it to
start copying things around via plain memory operations rather than SIMD
operations, ...).

yes, MSVC is not very smart: it sees a struct and automatically uses "rep
movsb" to copy it and stuff like "movups xmm0, [rax+0]" to access it...

(even then, struct-wrapped SIMD would likely still be faster than float
arrays and scalar operations...).

the cost though is that, since they are not wrapped, just typedef'ed, I
can't go and add overloaded operators for them, which is kind of lame.


but, thus far, the only real uses this has seen have been in
micro-optimizing things, since the float-arrays strategy has a lot more
weight (it is a lot more commonly used in my codebase).

I then ended up adding operations like vec3vf() and vfvec3() to allow quick
conversion between them.

float *fa;
vec3 v;
....
v=vec3vf(fa);
....
fa=vfvec3(v);
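
for a rough idea, the whole typedef'ed setup (including float-array
conversions along the lines of the above) could look something like this; a
simplified sketch, not the actual code, assuming SSE intrinsics, and with
made-up names for the constructor and conversion helpers (in C a function
can't share the name "vec3" with the typedef):

/* sketch only: typedef'ed SIMD vec3, roughly the shape described above */
#include <xmmintrin.h>

typedef __m128 vec3;        /* x, y, z in the low 3 lanes, 4th lane unused */

static vec3 mkvec3(float x, float y, float z)
{ return _mm_set_ps(0.0f, z, y, x); }

static vec3 v3cross(vec3 a, vec3 b)
{
    /* cross(a,b) = a.yzx*b.zxy - a.zxy*b.yzx */
    vec3 a_yzx = _mm_shuffle_ps(a, a, _MM_SHUFFLE(3, 0, 2, 1));
    vec3 a_zxy = _mm_shuffle_ps(a, a, _MM_SHUFFLE(3, 1, 0, 2));
    vec3 b_yzx = _mm_shuffle_ps(b, b, _MM_SHUFFLE(3, 0, 2, 1));
    vec3 b_zxy = _mm_shuffle_ps(b, b, _MM_SHUFFLE(3, 1, 0, 2));
    return _mm_sub_ps(_mm_mul_ps(a_yzx, b_zxy),
                      _mm_mul_ps(a_zxy, b_yzx));
}

/* the general idea behind the float-array conversions (just a guess at
   what vec3vf()/vfvec3() do; memory handling here is illustrative only) */
static vec3 vec3_from_floats(const float *f)
{ return mkvec3(f[0], f[1], f[2]); }

static void floats_from_vec3(vec3 v, float *f)
{
    float tmp[4];
    _mm_storeu_ps(tmp, v);
    f[0] = tmp[0]; f[1] = tmp[1]; f[2] = tmp[2];
}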


struct-wrapped floats are likely to be slowest, as I have seen how much a
trivial detail, such as a few unnecessary vector copies, can impact
performance in some cases (though, usually in the innards of a rendering
loop, where one may also end up trying to skimp on things like 'sqrt' as
well, ...).


>
> --
> Daniel Pitts' Tech Blog: <http://virtualinfinity.net/wordpress/>


From: bartc on

"Daniel Pitts" <newsgroup.spamfilter(a)virtualinfinity.net> wrote in message
news:nmDVm.43116$kY2.19196(a)newsfe01.iad...
> bartc wrote:
>>
>> "Daniel Pitts" <newsgroup.spamfilter(a)virtualinfinity.net> wrote in
>> message news:yeuVm.50745$cX4.33730(a)newsfe10.iad...
>>
>>> In the past, I've used a vector as a representation of a point, but I've
>>> realized that they have subtle differences in properties. A vector
>>> represents a delta, or a change, where a Point represents a fixed
>>> location.
>>>
>>
>>> So, what should my class names be? I'm thinking of calling them Vector,
>>> Point, Rotation, Angle, but Rotation seems to not quit fit. FixedAngle
>>> vs RelativeAngle is a bit too wordy, and there is probably a more
>>> concise and exact concept that I'm missing.

>> Do you have different classes for distance, length, angle, area, volume,
>> mass, time, velocity, acceleration,....?
> distance is length, and yes, length will have a class. area/volume/mass
> won't be needed in my simulation. Time will have a class. velocity and
> acceleration will both have classes.
>>
>> It sounds at first sensible to create dedicated types for each (and then
>> for each combination...), but perhaps a better suggestion is to keep the
>> types straightforward and put any such information into the variable
>> names. At least until your project is fully working.

> Ah, right, because compilers can catch misuse based on variable names. Oh
> wait, they can't.
>
> If it is a noun in your domain, it most likely should be a class in your
> model.

(I've never thought about variables being nouns, verbs or adjectives;
grammar isn't my strong point.)

There've been a couple of discussions on comp.programming about units and
how best to deal with them in a language, especially one like C++ which
encourages you to do this.

From what I remember, things can get hairy...
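
(the starting point is simple enough: distinct types, so the compiler rather
than the variable name does the policing; a trivial sketch of that much,
before combinations of units make it hairy:)

/* sketch only: one-member wrapper types, so mixed-up arguments are
   rejected at compile time; combined units (m/s, m*s, ...) are where
   the real work starts. */
typedef struct { double v; } Metres;
typedef struct { double v; } Seconds;

static Metres  metres (double v) { Metres  m = { v }; return m; }
static Seconds seconds(double v) { Seconds s = { v }; return s; }

static Metres add_metres(Metres a, Metres b)
{ Metres m = { a.v + b.v }; return m; }

/* add_metres(metres(3.0), seconds(2.0));   <- compile error, as desired */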

With Vector and Point, you have to stop and think about whether a particular
X,Y (or X,Y,Z) quantity is one or the other, and then you come across a use
which can apply to either but hasn't been allowed for in the type model.

When I worked on a graphics project, I used types such as Point, Matrix,
Edge, Transformation (although I called a Point type, completely wrongly, a
Coord, and used it for everything that used an unadorned X,Y,Z value).

--
bartc


From: Dan on

> > A point *has* a location. A vector has a magnitude and direction. Although
> > both points and vectors are representable in the same coordinate spaces,
> > the semantics of those representations are very different.
>
> what is a point? (x, y)
> what is a vector? <x, y>
>
> magnitude and direction can be calculated from a vector, but they are not,
> in effect, the vector (as I see it).


Actually... a vector as used in engineering and science has magnitude
and direction. If it doesn't, then... most engineering books ever
written need to be thrown away.

> IMO, this would be similar to claiming that <p, q, r> (in spherical coords)
> are also a vector, since they also be said to have direction and magnitude,
> but spherical coords are a different entity from a vector (although, they
> are convertible, which was a big part of my trick of shoving a quat into the
> form of 3 angles...).

A line originating at the coordinate axes and pointing towards
<p,q,r> can be considered a vector, but the point in space that such
a vector points to is certainly not itself a vector (in math or in
science). It has no magnitude and it has no direction and thus, by
definition, it is not a vector.

I make no comment as to how the OP should write his code, as points,
vectors, lines, etc. all have a lot of overlap in how you do
calculations with/on them. But... when all is said and done, a
point is still not a vector.

Cheers,
Dan :-)




From: Pascal J. Bourguignon on
Dan <dantex1(a)aol.com> writes:

>> > A point *has* a location. A vector has a magnitude and direction. Although
>> > both points and vectors are representable in the same coordinate spaces,
>> > the semantics of those representations are very different.
>>
>> what is a point? (x, y)
>> what is a vector? <x, y>
>>
>> magnitude and direction can be calculated from a vector, but they are not,
>> in effect, the vector (as I see it).
>
>
> Actually... a vector as used in engineering and science has magnitude
> and direction. If it doesn't, then... most engineering books ever
> written need to be thrown away.
>
>> IMO, this would be similar to claiming that <p, q, r> (in spherical coords)
>> are also a vector, since they also be said to have direction and magnitude,
>> but spherical coords are a different entity from a vector (although, they
>> are convertible, which was a big part of my trick of shoving a quat into the
>> form of 3 angles...).
>
> A line originating at the cooridinate axes and pointing towards
> <p,q,r> can be considered a vector, but the point in space that such
> a vector points to is certainly not iself a vector (in math or in
> science). It has no magnitude and it has no direction and thus, by
> definition, it is not a vector.
>
> I make no comment as to how the op should write his code, as points,
> vectors, lines, etc.. all have a lot of overlap in how you do
> calculations with/on them. But.... when all is said and done, a
> point is still not a vector.

Indeed. But it can _be_ _represented_ by a vector from the origin,
hence the confusion. Or it could also be represented by something else.


--
__Pascal Bourguignon__

From: BGB / cr88192 on

"Dan" <dantex1(a)aol.com> wrote in message
news:d18cd432-2d86-4091-8de2-5417fcf5f2ed(a)a10g2000pre.googlegroups.com...
>
>> > A point *has* a location. A vector has a magnitude and direction.
>> > Although
>> > both points and vectors are representable in the same coordinate
>> > spaces,
>> > the semantics of those representations are very different.
>>
>> what is a point? (x, y)
>> what is a vector? <x, y>
>>
>> magnitude and direction can be calculated from a vector, but they are
>> not,
>> in effect, the vector (as I see it).
>
>
> Actually... a vector as used in engineering and science has magnitude
> and direction. If it doesn't, then... most engineering books ever
> written need to be thrown away.
>

we can consider that, in these books, there is an implicit conversion going
on.
(here V` will mean V with the little hat)

if V` is referenced simply as V, then it actually means V=|V`|, hence the
magnitude is implicitly extracted from the vector, but magnitude != vector.

direction usually refers either to the vector itself, to a unit vector in
the same direction, or (IMO sloppily) to an angle.

in any case, the vector remains <x,y>, regardless of whether or not people
believe it "contains" a magnitude and direction, or whether we just have a
very lightweight notation for extracting them and/or composing vectors from
them (I will draw an analogy to the use of type-casting in many PL's...).


but, this much is more philosophical than practical...



>> IMO, this would be similar to claiming that <p, q, r> (in spherical coords)
>> are also a vector, since they also be said to have direction and
>> magnitude,
>> but spherical coords are a different entity from a vector (although, they
>> are convertible, which was a big part of my trick of shoving a quat into
>> the
>> form of 3 angles...).
>
> A line originating at the cooridinate axes and pointing towards
> <p,q,r> can be considered a vector, but the point in space that such
> a vector points to is certainly not iself a vector (in math or in
> science). It has no magnitude and it has no direction and thus, by
> definition, it is not a vector.
>

<p,q,r> is not a point.

rather, we could say that converting <p,q,r> to <x,y,z> gives:
  <x,y,z> = <r*cos(p)*sin(q), r*sin(p)*sin(q), r*cos(q)>

hence, <p,q,r> would also have a magnitude and direction, but in a different
coordinate space, and it is not a vector in the stricter sense of the word
(since it is not in the form <x,y,z>...).

the same would go for <theta, rho> in 2D, even though this actually would
"contain" magnitude and direction...


> I make no comment as to how the op should write his code, as points,
> vectors, lines, etc.. all have a lot of overlap in how you do
> calculations with/on them. But.... when all is said and done, a
> point is still not a vector.
>

as I see it, the distinction is moot, since a point is still relative to the
origin.

(x,y,z)=<x,y,z>+(0,0,0)
<x,y,z>=(x,y,z)-(0,0,0)

all fairly moot, IMO...