I have measured the speed now: for a dataset of 35,000 rows it starts out at 4-5 ms per row added to the dataframe and increases to about 9 ms per row by the end of the loop. For bigger data sets with millions of rows this will take a very long time. The response from the database takes less than a second, and each row contains 5 fields. I am using this method in the loop:

row = query.fetch([None])  # fetch the next row from the query object
df.loc[len(df)] = row      # append it as a new row at the end of the dataframe
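
Schematically, the whole loop is something like this (just a sketch: the column names are placeholders for my 5 fields, and the termination check stands in for however the real loop ends):

import pandas as pd

df = pd.DataFrame(columns=["f1", "f2", "f3", "f4", "f5"])
while True:
    row = query.fetch([None])   # one row per call
    if row is None:             # placeholder for the real end-of-results check
        break
    df.loc[len(df)] = row       # enlarging with .loc copies data internally,
                                # so each append gets slower as the frame grows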

What do you mean by COS, and how can I do this comparison?

Thanks for the answer, Eduard.

I might be looking in the wrong place for a solution to my problem. I am working with large data sets, up to 15-20 million rows, and using the Python pandas library for data manipulation and analysis. To populate a dataframe I fetch rows from the query object one row at a time, and this is very slow. Once the dataframe is populated there are no speed issues, so I suspect this per-row loading is the culprit. Is there a more efficient way of doing this?
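
For example, I wonder whether it would be faster to collect the rows in a plain Python list and build the dataframe in one call at the end, something like this (again just a sketch, with placeholder column names and a placeholder end-of-results check):

import pandas as pd

rows = []
while True:
    row = query.fetch([None])   # same per-row fetch as above
    if row is None:             # placeholder for the real end-of-results check
        break
    rows.append(row)            # appending to a plain list is cheap

df = pd.DataFrame(rows, columns=["f1", "f2", "f3", "f4", "f5"])  # build the frame once

That way the dataframe would be allocated once at the end instead of being copied on every append, but I don't know if that is the right approach here.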