David Foard · Dec 26, 2019

Performant index on date field

Is there a way to get a well-performing index on a date field? I have tried various date property indexes, and the query-plan cost always comes out quite high. Below are query plan cost values I have observed:

   StartDate > '2019-12-01'  --cost = 699168
   StartDate = '2019-12-21'  --cost = 70666
   StartDate between '2019-12-21' and '2019-12-28'  --cost = 492058


The query plans above were for type %TimeStamp.

Discussion (7)

While the cost may be high, the real question might be better focused on the performance.  Relative cost is just that, relative. 

What does the rest of the query look like?

What is the time to first row, time for all of the data? 

Does the query plan utilize the index on the StartDate column?

Thanks for the confirmation.
With this trick (applied to coordinates) I could beat Oracle Spatial by orders of magnitude.

That's what POSIXTIME does for you under the hood, so no need to require all your queries to be aware of this frugal innovation ;-)

A few recommendations to improve performance:

#1) %TimeStamp is nice for display, though rather inefficient for indexing, as it is mostly close to a unique index.
If you only check for the date, as you describe, you are better off with an index on a calculated property of type %Date.
Instead of strings, the date values in the index are then just integers,
which makes your index slim and fast.
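A minimal sketch of that idea (class, property, and index names here are made up, and the SqlComputeCode assumes the %TimeStamp is stored in ODBC format, yyyy-mm-dd hh:mm:ss):

```objectscript
/// Hypothetical persistent class: keep the %TimeStamp for display,
/// but index a calculated %Date derived from it.
Class Demo.Event Extends %Persistent
{

/// Full timestamp, kept for display purposes
Property StartDate As %TimeStamp;

/// Date-only part of StartDate; stored in the index as a plain
/// integer ($HOROLOG day count) rather than a string
Property StartDay As %Date [ Calculated, SqlComputed,
    SqlComputeCode = { set {*} = $zdateh($piece({StartDate}," "), 3) },
    SqlComputeOnChange = StartDate ];

/// Slim integer index used for date-only queries
Index StartDayIdx On StartDay;

}
```

Queries that filter on StartDay can then range-scan an index of small integers instead of comparing timestamp strings.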

A check for = is the easiest one, and therefore the fastest.
A check for > is the slowest, as you have no stop condition.
BETWEEN is somewhat better: you have to check both > and <, but you get a delimited start/stop range.

And an integer comparison is orders of magnitude faster than any string comparison on any available processor.
And you do lots of them!

In reality, you are better off checking the number of global references and the lines you execute.
That's the truth. Costs are educated guesswork based on generic parameters from the Tune Table statistics, good for the order of magnitude at best.


Relative cost is only useful when you compare different plans for the same query. By itself, "relative cost" means nothing.

To look into query performance we need to see
a) Query itself
b) Execution plan
c) Sources of all classes / tables from the query, so that we see how fields are defined, indexes that are available and selectivity of fields.

In addition to the suggestions made earlier (to provide more context such as full query & plan), you may also want to consider POSIXTIME as the data type for your date / time columns. It encodes dates as a simple 64-bit integer, which makes for efficient storage, indices and queries.
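A minimal sketch of what that could look like (class and property names are hypothetical):

```objectscript
/// Hypothetical class: the date/time column declared as %PosixTime,
/// which stores the value as a 64-bit integer under the hood.
Class Demo.Event Extends %Persistent
{

Property StartDate As %Library.PosixTime;

/// Index entries are plain integers: compact and fast to compare
Index StartDateIdx On StartDate;

}
```

In SQL DDL, the corresponding type is POSIXTIME.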

Hi David,

I am not sure of any handy solution but I would like to suggest one "frugal innovation" here. It might not fit the exact requirement but it can be of some help if changing the table structure is permitted.

If possible, change the date field to an integer/float and convert the whole date/time to seconds.

While storing the data, first convert the date to $HOROLOG format; it will come out like 65380,73921.1234.

Then convert that to seconds: (65380 * 24 * 3600) + 73921.1234.

An index on this column is going to perform better. Let me know if it's possible in your case.
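A sketch of that conversion in ObjectScript (variable names are mine; $ZTIMESTAMP is used here because, unlike plain $HOROLOG, it carries the fractional seconds shown in the example, though it is UTC-based):

```objectscript
// Convert a $HOROLOG-style value ("days,seconds") into one number
set ts    = $ztimestamp            // e.g. "65380,73921.1234"
set days  = $piece(ts, ",", 1)     // days since 1840-12-31
set secs  = $piece(ts, ",", 2)     // (fractional) seconds since midnight
set total = (days * 86400) + secs  // single sortable number to index
```

The resulting value sorts and compares as a single number, which is essentially what POSIXTIME gives you natively.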