From: Dennis on

David,

Your Comment: The determination of whether or not those tables were
actually normalized depends on the chosen definition of the entity being
modelled in the table. I would consider the 200-field table you mention later
to be unquestionably denormalized, even though I know nothing at all about
the content/function of those fields.

Response: This is a very common statement from people who do not know the P
& C Insurance industry. The 100- to 200-field master record (depending upon
the vendor) contained ONLY the common, non-recurring policy information
shared across the different lines of business (types of policies). The
insured's name and address information is not even included in these fields,
as that information is in another table.

Actually, when you examine the data at the “policy” level for different
lines (Personal Auto, Homeowners, Renters, Work Comp, Commercial Auto,
General Liability) you would be surprised to find it is actually quite
similar. It is not until you get to the “coverage” and insured “object”
level that the different lines vary dramatically.

There is a lot of policy information that is never seen by the public. Just
off the top of my head, some common policy information is: reinsurance
information; cancellation and reinstatement status (the cancellation &
reinstatement history is maintained in a separate table); statistical
reporting to ISO and each individual state; premium accounting (not Acct
Rec.) with written, earned, unearned, and inforce premiums; renewal offer
tracking; voluntary audit information; and physical audit tracking. You
could break all this information into its own tables, but why? So much of
the above information is inter-related (like effective and expiration dates)
that breaking it into separate tables just slows down data access and
increases complexity.

One of the clients I worked for was a state-owned insurer of last resort.
If you could not find insurance anywhere else, you could purchase it from
this company. The company was less than a year old when I started working
there. They were located in a city with a lot of banking expertise but very
little insurance expertise. Their staff had all sorts of banking experience,
but no insurance-experienced people. The first thing I did was sit down with
the vendor and go over their system. I learned that system in less than a
week. It was simple to learn not because I am that brilliant, but because
there are only so many ways you can build an insurance system. And if you
understand insurance, you can easily understand a new system. (Talk to an
auto mechanic – it does not take them long to learn a new “normal passenger”
car [I'm excluding the exotic engines] because there are only so many ways
to build a “normal passenger car”.) The vendor commented that they were glad
I was there because they had been trying to teach the banking people and
AS/400 people for about a year and no one really understood the system.
Again, that is not because the company did not have smart people or people
that lacked experience. It was because their people lacked insurance
experience. I had to give the insurance company's CFO the reinsurance
premium accounting journal entries for their financial books. This was not
because she was dumb (as a matter of fact she was quite brilliant); it was
because she did not have insurance accounting experience, which is quite a
bit different from normal accounting entries.

But I went through all that just to say that the head of the company's IT
department thought the same thing you do (he also came from a banking
background). So, he hired some database design consultants / experts to
review the database's design, who again did not understand insurance. (Had
they understood insurance, they could have taken a quick look and realized
the database was in pretty decent shape.) Instead, they gathered all of the
data, all of the relationships, all of the interdependencies, and did their
thing. Guess what: they came up with some minor suggestions but no major
changes, which is what I told the CIO before he started this effort. But oh
well. That is where experience comes in.

Also, as I stated in the other discussion on this subject (which I'm
surprised you missed, as you are commenting in that discussion also), I've
worked on 12 different vendors' insurance systems over the years. Those
systems have been written DECADES apart with totally different technology
and totally different development groups. At one extreme we have a flat-file
system running on a mainframe, and at the other end we have a Windows-based
object-oriented client/server system using an SQL database. And they have
all had a 100- to 200-field policy master table. (The more capable systems
had the larger number of fields.) It is interesting that you would disagree
with all those people with all that experience. But whatever.


Your comments: That sounds like a table that has a bunch of fields that are
used only for a single record type, so that an auto insurance policy has one
set of fields, but a renter's insurance policy has a different set of fields.

Response: Well, it may sound like that, but again, this is the common
statement for a newbie in the P & C Insurance field.

The normal way I've seen the policy master designed is to have a common
policy master table where all common fields (all 100 to 200, depending upon
the system) are stored in a single table. Then for each line of business
(such as auto or renters), you have a line-of-business policy master file
that contains those fields specific to that line of business. This table is
an extension of the common policy master table. In a good design, you simply
don't store line-specific fields in the policy master table; you store them
in the line-specific policy master files. One of the reasons the policy
master record is so big is that there is a whole lot of “behind the scenes”
data being stored that the policy holder never sees. (See above.)

At the coverage and insured object level, the story is totally different.
While the structure of the coverage tables mirrors the policy master and
line-specific policy master, the coverage master table is actually quite
narrow. That is because there is not a whole lot of common information
(other than effective and expiration dates, policy accounting [not Acct /
Rec. info], statistical accounting, coverage limits, and reinsurance) at the
coverage level. Most of the coverage information is stored in the various
line-specific coverage and insured object tables (two or more tables).
These tables are extensions of the coverage master table and children of
the line-specific policy master tables.

The homeowner coverage is actually comprised of multiple coverage tables
because a homeowner policy can cover multiple lines of business. For
example, a homeowner policy can cover fire and property damage (1 line of
business), general liability (another line of business), theft (another line
of business), and work comp for household help (another line of business).
These were just the lines of business that I could think of off the top of
my head. A full implementation of a homeowner policy is extremely involved
and very complicated.

But, back to your example. Your statement is incorrect. The personal auto
policy master, coverage, and insured object tables contain the auto-specific
coverage information, while the renter's policy, coverage, and insured
object tables contain the renter's-specific coverage information. The common
information for both the auto and renter's policies is stored in the policy
master table.
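To make the layout concrete, here is a minimal sketch of the common-master-plus-extension design described above. All table and field names are invented for illustration (SQLite stands in for whatever engine a given vendor used):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Common policy master: only the fields shared by every line of business.
cur.execute("""
    CREATE TABLE policy_master (
        policy_id        INTEGER PRIMARY KEY,
        effective_date   TEXT NOT NULL,
        expiration_date  TEXT NOT NULL,
        written_premium  REAL,
        line_of_business TEXT NOT NULL   -- 'AUTO', 'RENTERS', ...
    )""")

# Line-specific extensions: one row per policy, 1:1 with the master.
cur.execute("""
    CREATE TABLE auto_policy (
        policy_id      INTEGER PRIMARY KEY REFERENCES policy_master(policy_id),
        garaging_zip   TEXT,
        annual_mileage INTEGER
    )""")
cur.execute("""
    CREATE TABLE renters_policy (
        policy_id      INTEGER PRIMARY KEY REFERENCES policy_master(policy_id),
        building_type  TEXT,
        contents_limit REAL
    )""")

# An auto policy touches the master plus only its own extension table;
# no renters fields sit empty in a wide shared record.
cur.execute("INSERT INTO policy_master VALUES (1, '2010-01-01', '2011-01-01', 500.0, 'AUTO')")
cur.execute("INSERT INTO auto_policy VALUES (1, '30301', 12000)")

row = cur.execute("""
    SELECT m.policy_id, m.line_of_business, a.annual_mileage
    FROM policy_master m JOIN auto_policy a USING (policy_id)
""").fetchone()
print(row)  # (1, 'AUTO', 12000)
```

The same pattern repeats one level down: a narrow coverage master table with line-specific coverage and insured-object tables as extensions.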

Your comment: Any time you're using some fields for some records and not
for others, it's an indication to me that the entity has been misdefined, and
should probably be broken into at least two tables, with a narrow header
table and a long child table, where each row stores what was formerly a field
in the wide table.

Response: You are preaching to the choir here! I totally agree.

However, we are going to have to disagree on the “narrow header table”
issue. The header table is as long as the data model / structure requires it
to be. If it is short, it is short. If it is long, then it is long.

Your comment: All that said, my conclusion could be wrong for any
particular application.

Response: I agree with this point.

Your comment: But "fields are expensive, rows are cheap" is a generalized
rule of thumb, not a hard-and-fast law of nature. It allows for exceptions
for certain purposes, but is a starting point for evaluating a schema design.

Response: I now understand John's logic behind “Fields are expensive, rows
are cheap” and, given the context, I fully agree with it.


Dennis

From: David W. Fenton on
Dennis <Dennis(a)discussions.microsoft.com> wrote in
news:69F33AC7-4BC9-43ED-9EAC-7266290D9FE8(a)microsoft.com:

> I can see where disk caching would help in a sequential process,
> but does disk caching really help in a randomly accessed database
> during data entry?

Yes, because every modern database uses b-tree traversal of indexes
to locate records.
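To illustrate the point: the interior pages of a b-tree index are shared by every lookup, so once they are in the cache, a random fetch costs roughly one physical page read (the leaf/data page) rather than one read per tree level. A toy simulation, with the fanout and page counts invented purely for illustration:

```python
import random

class CachedBTree:
    """Toy model: count physical page reads for random key lookups,
    with a cache that holds only the shared interior index pages."""

    def __init__(self, n_records, fanout=100):
        # Smallest height h with fanout**h >= n_records.
        h = 1
        while fanout ** h < n_records:
            h += 1
        self.height = h
        self.cache = set()
        self.page_reads = 0

    def lookup(self, key):
        # Walk root -> leaf. Interior pages are shared across all keys,
        # so we model one page per level; leaf pages are assumed uncached.
        for level in range(self.height - 1):
            page = ("interior", level)
            if page not in self.cache:
                self.page_reads += 1
                self.cache.add(page)
        self.page_reads += 1  # leaf/data page: still a physical read

tree = CachedBTree(n_records=1_000_000, fanout=100)  # height 3
random.seed(0)
for _ in range(1000):
    tree.lookup(random.randrange(1_000_000))

# Uncached, 1000 random lookups would cost 3 reads each = 3000 page reads.
# With the two interior pages cached after the first lookup, it is ~1 each.
print(tree.page_reads)  # 1002
```

The record itself still has to come off the disk, but the cache turns the traversal from three reads into essentially one.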

Next question?

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
From: David W. Fenton on
Dennis <Dennis(a)discussions.microsoft.com> wrote in
news:4A00A4E2-29D3-45C7-B3FC-511E60140DFE(a)microsoft.com:

> Hmmm, I see your point and kind of agree with it. My background
> is in large and midrange computers, where it is nothing to have a
> 200-field, 30K record.
>
> However, I realize that Access is a different beast and I'm having
> to learn to adjust for its restrictions. Thanks for the insight.
> Just more to think about. But then I learn something new also.

Schema design should be as independent of the database engine as
possible, so Access is *not* different in any way, shape or form. I
would recommend *as a starting point* the same normalized design for
any database engine.

We are at least 15-20 years past the days when the developer of a
desktop database app needed to worry about the physical storage of
data. It's only in edge cases where any modern developer is going to
start considering it in the design of the database schema.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
From: Dennis on
David,


Your comment: Schema design should be as independent of the database engine
as possible, so Access is *not* different in any way, shape or form. I would
recommend *as a starting point* the same normalized design for any database
engine.

Response: Your comment is self-contradictory. Instead of saying “Schema
design should be independent of the database engine,” you stated “Schema
design should be as independent of the database engine as possible.” The “as
possible” qualifier by definition admits that things will be different
between database engines, which throws your whole argument out the door.

Yes, Access is different from DB/400, and Oracle, and D3. It is very
similar, but it is not the same. From what I've read, Access has not
implemented the entire SQL language. Also, Access does not support blobs
very well, whereas Oracle does. From what I've read, it is highly
recommended that we not store blobs in an Access database. Rather, we store
the path and file name to the blob and let DOS/Windows store the binary file
in the specified directory. From what I've read, Oracle has no problem
storing blobs in its records. I don't know if DB/400 stores blobs. I know D3
does not store blobs, but jBase might. I know D3 supports multi-valued
lists, and I think Oracle does also, which are very useful in exploding the
parts of a fully assembled unit (i.e., a car door). Access does not support
multi-valued lists. So much for “Access *not* different in any way, shape,
or form”.
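The path-instead-of-blob workaround mentioned above can be sketched like this. The file names and table are hypothetical, and SQLite stands in for the Access back end:

```python
import os
import sqlite3
import tempfile

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE policy_document (
        doc_id    INTEGER PRIMARY KEY,
        file_path TEXT NOT NULL   -- path to the binary, not the binary itself
    )""")

# Instead of storing the binary in the table, write it to a directory
# and store only the path; the file system holds the bytes.
doc_dir = tempfile.mkdtemp()
path = os.path.join(doc_dir, "policy_0001.pdf")
with open(path, "wb") as f:
    f.write(b"%PDF-1.4 ...")  # stand-in bytes for a real scanned document

con.execute("INSERT INTO policy_document VALUES (1, ?)", (path,))

# To read the document back, fetch the path, then open the file.
(stored_path,) = con.execute(
    "SELECT file_path FROM policy_document WHERE doc_id = 1").fetchone()
with open(stored_path, "rb") as f:
    data = f.read()
print(data[:8])  # b'%PDF-1.4'
```

The table stays small and the database size limit is never threatened by the documents themselves; the trade-off is that the paths can go stale if files are moved outside the application.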

Even as a newbie, I knew that statement was false. Even a prima facie
examination of that statement indicates it is false. Are you saying Access
is not different in any way, shape, or form from DB/400, Oracle, MySQL, or
SQL Server? I'll give you one difference: the maximum record size on the
different databases is different. The maximum table and database sizes are
different. Those two differences alone negate your statement.

I also know that DB/400 does things differently than Access, because I tried
some DB/400 approaches on Access and found that the approach did not work on
Access. Given that DB/400 does not have a 4K record size limit as Access
does, that alone could possibly cause a different schema. Access would have
multiple tables where DB/400 would have one.

I also remember that Access has a relatively small (from a commercial
insurance policy standpoint) maximum table / database size. DB/400 and
Oracle don't have that same limitation. To me, this would definitely
influence the schema design. Assuming I had one table (which you would in
commercial policies) that exceeded the size limitation of Access, I would
have to design around that limitation. In DB/400 and Oracle, I would not
have to.

Each database engine has different capabilities, enhancements, levels of SQL
implementation, and limitations than the next. What it appears you are
saying is that we should design our schema to the lowest common denominator
and ignore any additional capability offered by the particular database. As
soon as you move away from this position, you have to design different
schemas for different engines. Granted, those changes might be slight. But
as soon as you design something different for the different engines, you
have violated your statement that Access is not different.

While I will agree that the general schema should be very similar for the
different database engines, they will not be the same. The design of a Pick
– D3, DB/400, Oracle, and Access schema would differ mainly because of the
different capabilities of the different database engines.



Your comment: We are at least 15-20 years past the days when the developer
of a desktop database app needed to worry about the physical storage of data.
It's only in edge cases where any modern developer is going to start
considering it in the design of the database schema.

Response: You are absolutely correct. That is why I was wondering about
John's comment. I thought that he was implying something about Access's
speed. It turns out I simply misunderstood his statement.

Dennis

From: Dennis on
David,

I stated: "The first data entry might access the first record, the next the
1,000th record, the next the 5,000th record, and so on and so on. So, unless
the entire table is cached, does it really help?"

Your comment: Yes, because every modern database uses b-tree traversal of
indexes to locate records.


My response: So what if the modern database uses b-tree traversal of
indexes to locate the records? What does that have to do with physically
reading the record from the disk on a server?

I can see where the speed of determining the location of the particular
record would be assisted by this, but knowing where the record is and
getting the record are two totally different things. Once the disk address /
page / whatever is determined, the db engine still has to go to disk to get
the record unless the entire table is in cache or in memory on the local
machine.

So once again, how does all this caching and b-tree traversal speed up the
physical reading of a record that is not in memory? The database engine
still has to go to disk, or worse yet, over the network, to get the desired
record.

If I've got it wrong (which I may well have), please explain where I am
missing the point.

Thanks,

Dennis