OfficeFrustration: a Microsoft Office (Excel, Word) forum



Managing Large Tables



 
 
  #11  
Old August 8th, 2007, 08:58 AM posted to microsoft.public.access,microsoft.public.access.forms,microsoft.public.access.queries,microsoft.public.access.reports,microsoft.public.access.tablesdbdesign
Jamie Collins
external usenet poster
 
Posts: 1,705
Managing Large Tables

On 7 Aug, 20:02, "Tony Toews [MVP]" wrote:
> Jamie Collins wrote:
> > > I would never delete data. You never know when someone might want it.
> >
> > That's what backups are for. Forcing users to include AND IsDeleted =
> > 'N' in *every* join with *every* table will not make you popular.
>
> Please explain that cryptic answer in a bit more detail.


That would be my pleasure.

You said, "I would never delete data." What design pattern can we
infer about your tables if you never remove any rows from them?

To assume your applications are never successful enough for any of
their data to reach obsolescence would be unkind <g>. All your tables
could be valid-time state ('history') tables -- a start- and end-date
pair on each row, sequenced primary keys via table-level CHECK
constraints, etc -- with VIEWs to return only the rows in the current
state (end_date IS NULL or a similar 'magic date' value). Maybe a
similar transaction-log table design with summary VIEWs. Most people
conclude that temporal models are overkill for most tables unless
there is a specific business need (I assume you get authorization
from your clients before implementing your 'never delete data'
pattern) and, anyhow, too complex for Access/Jet's SQL syntax: e.g.
you are limited to one SQL statement per PROC, whereas a sequenced
update requires five statements (two INSERTs and three UPDATEs).
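
To make that concrete, here's a rough sketch of the valid-time pattern in
generic SQL, with invented names (OrderStatusHistory, order_id, etc.);
note that Jet only accepts CHECK constraints and CREATE VIEW in its
ANSI-92 mode (e.g. DDL executed through ADO), so treat this as an
illustration rather than paste-ready code:

CREATE TABLE OrderStatusHistory (
  order_id INTEGER NOT NULL,
  order_status VARCHAR(20) NOT NULL,
  start_date DATETIME NOT NULL,
  end_date DATETIME,  -- NULL (or a 'magic date') marks the current state
  CONSTRAINT pk_order_status_history PRIMARY KEY (order_id, start_date),
  CONSTRAINT period_is_valid CHECK (end_date IS NULL OR end_date > start_date)
);

CREATE VIEW CurrentOrderStatus
AS
SELECT order_id, order_status, start_date
FROM OrderStatusHistory
WHERE end_date IS NULL;

(The sequenced-key rule that periods for the same order_id must not
overlap needs a table-level CHECK with a subquery; I've left it out to
keep the sketch short.)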

The most common pattern I see is to add a column (is_deleted, Status,
etc.) and that is the pattern I was alluding to (sorry if I was being
obscure). Again, you could provide summary VIEWs (WHERE is_deleted =
'N') and take steps to point users at those VIEWs (e.g. revoke read
permissions on the base tables while providing WITH OWNERACCESS
OPTION stored procs for CRUD operations, or only writing updatable
VIEWs LOL!) but, again, the typical response is to take no action and
let users figure out for themselves that they have to eliminate
notionally (but not physically) 'deleted' rows in every join condition
and search condition.
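
For example (again with invented names, and assuming a hypothetical
Orders table that carries its own is_deleted flag), the soft-delete
column plus a filtering VIEW might look like this; the last query shows
what users are left writing if they hit the base tables directly:

CREATE TABLE Customers (
  customer_id INTEGER NOT NULL PRIMARY KEY,
  customer_name VARCHAR(50) NOT NULL,
  is_deleted CHAR(1) DEFAULT 'N' NOT NULL,
  CONSTRAINT is_deleted_yn CHECK (is_deleted IN ('N', 'Y'))
);

CREATE VIEW ActiveCustomers
AS
SELECT customer_id, customer_name
FROM Customers
WHERE is_deleted = 'N';

SELECT C.customer_name, O.order_date
FROM Customers AS C
INNER JOIN Orders AS O
  ON (O.customer_id = C.customer_id AND O.is_deleted = 'N')
WHERE C.is_deleted = 'N';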

This pattern poses a problem for keys. You may want to reuse a natural
key value that has been 'deleted' (as distinct from un-deleting the
original entity with that key value). You might think that adding the
is_deleted column to the key will help, but that only permits one row
per key value in each state, whereas what you actually want is an
unlimited number of entities in the is_deleted = 'Y' state sharing a
key value.
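
A quick illustration with a hypothetical Parts table: a UNIQUE
constraint on (part_number, is_deleted) permits at most one row per key
value in each state, so reusing a 'deleted' key works exactly once:

CREATE TABLE Parts (
  part_number VARCHAR(10) NOT NULL,
  part_name VARCHAR(50) NOT NULL,
  is_deleted CHAR(1) DEFAULT 'N' NOT NULL,
  CONSTRAINT uq_parts UNIQUE (part_number, is_deleted)
);

INSERT INTO Parts (part_number, part_name, is_deleted)
VALUES ('ABC-123', 'Widget', 'N');

-- soft delete it, then reuse the key for a brand-new part: both succeed
UPDATE Parts SET is_deleted = 'Y' WHERE part_number = 'ABC-123';

INSERT INTO Parts (part_number, part_name, is_deleted)
VALUES ('ABC-123', 'Widget Mk II', 'N');

-- soft deleting the new part now fails: ('ABC-123', 'Y') already exists
UPDATE Parts SET is_deleted = 'Y'
WHERE part_number = 'ABC-123' AND is_deleted = 'N';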

In the same spirit of sharing, if you never delete data then what
design patterns do you actually use to handle this? TIA.

Jamie.
