This is great progress, a big improvement (in my opinion) on the issues that have been proving contentious in builds 163.6 and 163.7.
I think there is still room to make things more intuitive and useful in normal workflows--specific suggestions below.
The command window can be saved as a query using Edit - Save as Query. Invoking the command creates a new query component, copies the current text from the command window, then selects the created query in the Project pane and puts the focus into the Project pane list, so that pressing Enter opens the new component.
What could be more intuitive than that? Great design.
Same with Edit - Save as Map, also brilliant.
As for having the number of records back, that makes more difference than I would have thought--mainly psychological, I suppose: even massive tables now sit on firm ground.
But the biggest deal is View - Filter - Filter Fetched Records Only. Many, many thanks for this option. It works really well. The implementation is perfect.
Some ideas for possible improvements in the same direction. Hopefully some of this is already planned (or something like it).
(1) An option for ordering, just like 'View > Filter > Filter fetched records only' for filtering. So 'View > Order > Order fetched records only'. Again defaulting to on.
With the option switched off, ordering would scan and filter the whole table, regardless of size, and then show the N highest- or lowest-ranked records from the whole set (N being the batch size, currently 50000).
With a massive table, this sometimes allows us to ask a silly question, and get a silly answer in return. But yes, let us exercise poor judgement from time to time--only, to mitigate its effects, please allow the "full fetch" (so to speak) to be cancelled at any time. On cancellation it would be nice to retain all records fetched so far, and for these to then be sorted.
Why do I think this is crucial? I would use it in almost every workflow, certainly for everything important.
Typically what I want to do, when interrogating a dataset--whether looking for errors on some criterion calculated in SQL and written to a field, or just when getting to know new data--is to choose a field, sort on it to see the largest values for that field in the whole dataset, then reverse the sort order to see the smallest values. In either case, I may see values outside expected magnitudes, or more large or small values than I would have expected; by eye I might notice rough natural breaks that will be useful in the next steps. Those two clicks, and only two screenfuls of records, can tell me an enormous amount about the data, often everything I immediately need to know. Often sorting on a second field will also be informative, but not always. In any case the ordering must be on the whole table. (With the infrastructure now built in 9 I would always do one or more sorts on just 50000 records first, of course. That is very powerful too, as an initial option.)
(By the way this is why comments in the thread for 163.6 which focussed on the impossibility of inspecting all records in a very large table, while true, went wide of the mark. Inspecting every record is seldom the point. If I have chosen my criterion or criteria wisely, then what I usually want to see is the range--provided it is the absolute range. A range within an arbitrary subset is not often so useful--to put the same thing another way.)
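To make the semantics of (1) concrete, here is a minimal sketch using SQLite as a stand-in data source (the table, column and tiny batch size are all hypothetical, just to keep the example self-contained). With the option on, only the fetched batch is sorted; with it off, the engine scans the whole table and returns the N highest-ranked records from the full set:

```python
import sqlite3

# Hypothetical table, far larger than the batch size.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val REAL)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(i * 7 % 1000,) for i in range(10000)])
BATCH = 50  # stand-in for the 50000-record batch size

# 'Order fetched records only' ON: fetch the first batch in storage
# order, then sort just that batch.
batch = conn.execute("SELECT val FROM t LIMIT ?", (BATCH,)).fetchall()
ordered_batch = sorted(batch, reverse=True)

# Option OFF: scan the whole table, return the N highest-ranked records.
full_top = conn.execute(
    "SELECT val FROM t ORDER BY val DESC LIMIT ?", (BATCH,)).fetchall()

# The two answers generally differ: the batch only ever sees the first
# 50 rows fetched, the full scan sees all 10000.
print(ordered_batch[0][0], full_top[0][0])
```

The point of the workflow above is exactly this difference: the top value of a fetched batch is an arbitrary-subset maximum, while the full scan gives the absolute range.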
That is the first thing, for me essential. By comparison the others would be 'nice to have'.
(2) An option to adjust the batch size used when fewer than all records are returned. 50000 will not suit all users, all networks, all datasets, all workflows, etc.
(3) Possibly, for clicking on the 'fill record' (at the bottom, meaning 'there is more data') to do something. Because it looks as if it should do something? Well, that is no doubt a bad reason. And given something like (1) above, perhaps this would be unnecessary.
But FWIW, for native projects only, clicking on the fill record might do either of these things:
(a) Fetch all records (no batch limit)--at least until I press cancel. When all records have been scanned (or I cancel), start to apply applicable filters and ordering, if any, to the records that have been fetched; if no filtering or ordering is in force, just display the first screen of records (exactly as for builds before 163.6) and leave us to PgDn, Ctrl-End or whatever to navigate as best we can.
(b) Fetch another batch of records. There are problems with this, which Adam has covered. All the same, it might be possible to do something sensible and efficient here just for native data sources (which would be enough). To my mind it would not matter, if no ordering is in place, if the next batch(es) overlapped with previous batch(es) to some degree, or returned updated data for records previously fetched. That is SQL, that is live data.
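A rough sketch of why (b) is tolerant of overlap on live data, again using SQLite with naive LIMIT/OFFSET paging as a stand-in (a native source would presumably use its own cursor machinery; the table and batch size are hypothetical). If rows change between fetches, a keyless second batch can skip or repeat records:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(i,) for i in range(10)])
BATCH = 4  # stand-in batch size

def fetch_batch(offset):
    # Naive keyless paging; a real engine would page by key or cursor.
    return conn.execute(
        "SELECT id, val FROM t ORDER BY id LIMIT ? OFFSET ?",
        (BATCH, offset)).fetchall()

first = fetch_batch(0)  # ids 1, 2, 3, 4
# Live data: a row from the first batch is deleted before the second
# fetch, so later rows shift down and id=5 is never returned at all.
conn.execute("DELETE FROM t WHERE id = 2")
second = fetch_batch(BATCH)  # ids 6, 7, 8, 9 -- id=5 was skipped
```

With no ordering in force this kind of drift is, as argued above, acceptable: that is SQL, that is live data.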
But again, only (1) is a feature which I think is truly needed.