Disk Storage

Primary file organization: how files are physically stored on disk; where the read/write head has to go to read the file. You want to mess with this as little as possible! Find your data a good home and leave it there. There’s only one way to do this per file: you will store/arrange your records by one particular field (ID, SSN, LName, etc.)

Secondary file organization: auxiliary means of accessing data; indices! You can make as many of these as you want! Index by City, index by SSN… you basically list the records and their physical locations in order of whatever field suits your needs.

Primary file organization (Chapter 13!)

Things to remember when working with disks:

  1. Disks offer random access, read by block (LBA: Logical Block Address — usu. starts at 0, goes to n-1, where n = # blocks on disk)
  2. We still use tape backup because tape is the cheapest storage per bit!
  3. reading by bits or records is inefficient! We read by blocks!
  4. seek time: time to move head to block
  5. latency (a.k.a. rotation delay): time to spin disk to right position under head
  6. block transfer time: time to move the block from disk through the head into memory (or back, for a write). For a single random block access, seek time and latency usually dominate; transfer time adds up when you move lots of blocks
  7. placing blocks contiguously reduces seek time and latency, but that’s beyond the scope of this course, so we will concentrate on reducing the number of block reads and writes
  8. Double buffering allows us to read Block 1 and let the CPU start beating on it while we load Block 2 into the other buffer. Then the moment the CPU gets done beating on #1, it turns to the other buffer and beats on #2, no waiting! This speeds up the total process by allowing processing to happen simultaneously with some data transfer (see the sketch after this list)
  9. Non-spanning: records can’t be broken up across blocks. This wastes some block space
  10. Blocking factor (bfr): how many records we can fit per block. With fixed-length, non-spanning records, bfr = ROUNDDOWN(block size / record size), and the whole file needs ROUNDUP(#records / bfr) blocks (worked numbers after this list)
  11. Fixed-length records waste space, but they are much more efficient to process and are what we will usually use.
  12. Read takes one block transfer; write usually requires two (read the block in, change it, write it back), unless we luck out and the block we want is already sitting in memory. Systems may allow pegging: we designate certain blocks as worth keeping around in the buffer.
  13. Choose your primary file organization to make most efficient the operations most important to you (search, insert, update, delete)
    1. Heap: unordered! INSERT by APPEND, records ordered by time created.
      1. Inserting is cheap: read the last block, and if there’s room, write the record there; otherwise create a new block. Either way it’s only a couple of block accesses (one read plus one write).
      2. Searching is a pain, requires b/2 block reads on average; requires reading all b blocks on a search for a non-existent record.
      3. DELETE: must first search (avg. b/2). Then must decide what to do with the empty space! Instead of physically deleting the record and either Swiss-cheesing the disk or having to haul records in to backfill, you can just flag the record with a deletion bit. Odds are we’ll use downtime to compress the disk to fill the holes (garbage collection).
    2. Physical sort: ordered on some oft-used field (like EmpID, SSN, ZIP…)
      1. INSERT: pain! You’ve got to move everyone over to make room for new records… or you create an overflow file (kept as a heap) and put up with searching the whole overflow until we have time to do garbage collection during downtime.
      2. search: easier on the sort field (so you use this method when you do a lot of searches on that particular field)
      3. DELETE: again, sped up. Leave flag on DELETEd record. Note that DELETE could allow us to stick in INSERTs if there’s room… but making the computer consider that possibility usually eats more time than just sending every INSERT to the overflow.
      4. Note that physical sort allows binary search! Go to the middle, check whether we need to go below or above, keep halving until we hit target! Binary search will take average log2(b) block reads.
  14. Hashing: a way of mapping records to disk by squeezing a wide range of key values down to a fixed number of buckets; taking the primary key mod the number of buckets is the usual trick (tiny sketch below); good academic topic that we’re not spending a lot of time on!
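
For item 8, here is a minimal double-buffering sketch in Python. The names read_block and process_block are hypothetical stand-ins for the real disk read and the real CPU work; the point is just that the reader thread can fetch the next block while the main loop chews on the current one.

```python
import threading
import queue

def read_block(i):
    """Stand-in for a disk read (hypothetical; returns fake block contents)."""
    return f"block-{i}"

def process_block(blk):
    """Stand-in for the CPU 'beating on' a block."""
    pass

def double_buffered_scan(num_blocks):
    # A one-slot queue plus the block the CPU currently holds acts as two buffers:
    # the reader fills one while the CPU processes the other.
    buf = queue.Queue(maxsize=1)

    def reader():
        for i in range(num_blocks):
            buf.put(read_block(i))   # blocks when the spare buffer is still full
        buf.put(None)                # sentinel: no more blocks

    threading.Thread(target=reader, daemon=True).start()
    while (blk := buf.get()) is not None:
        process_block(blk)           # overlaps with the reader fetching the next block

double_buffered_scan(10)
```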
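
To put rough numbers on the blocking factor (item 10) and the heap vs. sorted-file search costs (item 13), here is a back-of-the-envelope calculation. The block size, record size, and record count are invented for illustration, not figures from the course.

```python
import math

B = 4096        # block size in bytes (assumed)
R = 100         # fixed-length record size in bytes (assumed)
r = 1_000_000   # number of records (assumed)

bfr = B // R                       # blocking factor: records per block (non-spanning)
b = math.ceil(r / bfr)             # number of blocks the file needs

linear_avg = b / 2                 # heap: average block reads to find an existing record
linear_miss = b                    # heap: block reads to conclude a record doesn't exist
binary = math.ceil(math.log2(b))   # sorted file: block reads for a binary search

print(f"bfr = {bfr} records/block, file = {b} blocks")
print(f"heap search: ~{linear_avg:.0f} reads on average, {linear_miss} on a miss")
print(f"binary search on the sort field: ~{binary} reads")
```

With these made-up sizes the gap is dramatic: about 12,500 block reads for a linear scan versus about 15 for a binary search on the sort field.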
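
And for item 14, the "mod of the primary key" idea in miniature; the bucket count and keys here are arbitrary.

```python
NUM_BUCKETS = 7                      # arbitrary bucket count (ideally prime)

def bucket_for(key: int) -> int:
    """Map a numeric primary key to a bucket/block number."""
    return key % NUM_BUCKETS

for ssn in (123456789, 987654321, 555443333):
    print(ssn, "->", bucket_for(ssn))
```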

Secondary File Structures (Indexing! Chapter 14!)

  1. Making an index will save time in most cases
  2. Primary Index: built on primary (unique) key
  3. Clustering index: built on an ordering field that isn’t unique, so it points to chunks of records that share a value (all the Bobs, all the Marys, etc.)
  4. Primary Index is another table, a separate file, with two fields: the primary (candidate) key and the block number in which the record with that primary key value is stored. The primary key may be the ordering key for the primary storage; doing a binary search on this index is still shorter than doing a binary search on the disk itself (fewer blocks to churn through; see the sketch after this list).
  5. Primary index allows binary search! Zoom! Searches now take log2(#blocks in Primary Index) + one more block read from disk.
  6. Remember, the primary index may still be huge (imagine Blogger’s primary index on usernames), but it won’t be as huge (won’t use as many blocks) as the actual database
  7. Dense index: every record in database gets entry in index.
  8. Sparse index: not dense! Only some records get entries (say, one per block).
  9. A primary index can be sparse: since it uses the primary key, it can simply list the first or last key value (the anchor value) of each block on disk. This is good!
  10. Secondary index uses some key other than the ordering field of the primary file organization. Since the value of the key in this index has nothing to do with the block number, we need to have an index entry for every record.
  11. If search matters, create an index!
  12. You probably still have to read the index from disk. If you can keep the index in memory for lots of operations, you really speed things up.
  13. Binary search tree: every child to the left is smaller, every child to the right is larger; efficiency comes from minimizing the depth of the tree (minimum number of levels = ROUNDUP(log2(n+1)), where n = # records)
  14. B-tree: an index scheme that generalizes the binary search tree; the idea is that since we live in the world of disks and have to read entire blocks, we might as well pack as many keys into each node (block) as we can
  15. B+-tree: internal nodes do not store pointers to data blocks, which means there’s room for more index values and thus larger fan-out; only the leaf nodes pair key values with data pointers, so every search must go to the bottom of the tree. But the B+-tree’s bigger fan-out gives an advantage!
  16. So, B-tree, you probably need to go to the bottom leaf, but you might get lucky. B+-tree always goes to the bottom leaf. (Somewhere there’s a formula for this; see the rough fan-out calculation after this list!)
  17. B+-tree also has the advantage that you can sequentially read the bottom row and get every record! B-tree, you’ve got to jump all over to get that complete list.
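
A small sketch of the sparse primary index from items 4, 5, and 9: one (anchor key, block number) entry per data block, binary-searched to find the block that could hold a key. The keys and block numbers here are made up; in the real thing the index itself lives in blocks, so the binary search costs about log2(#index blocks) block reads, plus one read of the data block.

```python
import bisect

# Sparse primary index: one entry per data block, keyed by the block's
# first (anchor) primary-key value. Entries are (anchor_key, block_number).
index = [(100, 0), (350, 1), (720, 2), (1404, 3)]   # invented anchors
anchors = [k for k, _ in index]

def block_for(key: int) -> int:
    """Binary-search the index for the block that could hold `key`."""
    i = bisect.bisect_right(anchors, key) - 1
    return index[max(i, 0)][1]

print(block_for(800))   # -> 2: read block 2 from disk, then scan it for key 800
```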
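
And something like the calculation item 16 alludes to: with assumed block, key, and pointer sizes (not course figures), count how many children a B-tree node versus a B+-tree internal node can have, and roughly how many levels each needs. Because B+-tree internal nodes skip the record pointers, they fan out wider.

```python
import math

B  = 512    # block size in bytes (assumed)
K  = 9      # search key size in bytes (assumed)
P  = 6      # tree (block) pointer size in bytes (assumed)
Pr = 7      # record pointer size in bytes (assumed)

# B-tree node: p block pointers, p-1 keys, p-1 record pointers must fit in a block:
#   p*P + (p-1)*(K + Pr) <= B   ->   p <= (B + K + Pr) / (P + K + Pr)
p_btree = (B + K + Pr) // (P + K + Pr)

# B+-tree internal node: p block pointers and p-1 keys only (no record pointers):
#   p*P + (p-1)*K <= B          ->   p <= (B + K) / (P + K)
p_bplus = (B + K) // (P + K)

n = 1_000_000   # number of records to index (assumed)

# Rough estimate: if every node had full fan-out, indexing n keys takes about
# log base fan-out of n levels.
levels_btree = math.ceil(math.log(n, p_btree))
levels_bplus = math.ceil(math.log(n, p_bplus))

print(f"B-tree fan-out  ~{p_btree}, levels for {n:,} keys ~{levels_btree}")
print(f"B+-tree fan-out ~{p_bplus}, levels for {n:,} keys ~{levels_bplus}")
```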

Next Time: Optimization!
