Fancy algorithms are slow when n is small, and n is usually small.
There are only two hard things in Computer Science: cache invalidation and naming things.
XML is like violence — if it doesn’t solve your problems, you are not using enough of it.
If you don’t know how compilers work, then you don’t know how computers work.
Software is easy to make, except when you want it to do something new.
When faced with a problem you do not understand, do any part of it you do understand, then look at it again.
An underlying problem with artificial intelligence that I have personally experienced in my forty years in this area is that as soon as an AI technique works, it’s no longer considered AI and is spun off as its own field (for example, character recognition, speech recognition, machine vision, robotics, data mining, medical informatics, automated investing).
If it’s your decision, it’s design; if not, it’s a requirement.
Inside every large, complex program is a small, elegant program that does the same thing, correctly.
On database index efficiency
While researching why SELECT COUNT(*) is slow on PostgreSQL, I found this nice rule of thumb for B-tree (the most common type of database index) efficiency:
A selectivity of at least 10% is necessary for a B-tree index to be helpful,
where selectivity = unique index values / total number of records.
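The rule of thumb above can be sketched in a few lines of Python. This is only an illustration of the formula, not anything PostgreSQL itself computes; the function names and the 10% threshold default are my own.

```python
def index_selectivity(unique_values: int, total_rows: int) -> float:
    """Selectivity = unique index values / total number of records."""
    return unique_values / total_rows


def btree_index_helpful(unique_values: int, total_rows: int,
                        threshold: float = 0.10) -> bool:
    """Rule of thumb: a B-tree index pays off when selectivity >= ~10%."""
    return index_selectivity(unique_values, total_rows) >= threshold


# A boolean column on a million-row table has selectivity 2 / 1_000_000,
# far below 10%, so a plain B-tree index on it rarely helps.
# A near-unique column such as an email address has selectivity close to 1.0,
# so an index on it is very effective.
```

In PostgreSQL you can estimate the ingredients of this ratio from the planner's statistics (for example, the n_distinct column in the pg_stats view), though the exact numbers there are estimates, not exact counts.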