Manifold System 9.0.168.4
adamw


10,447 post(s)
#26-Oct-18 17:31

9.0.168.4

manifold-9.0.168.4-x64.zip

SHA256: 51c36bbac5000107f8c392a99fb008d79ef487d0b6d9a91eced29c25af3b9b04

manifold-viewer-9.0.168.4-x64.zip

SHA256: 8441fb9a1443ffbbb4d5e6787c1247a66dbdd273011fd1e6a612294aabd8f94d

adamw


10,447 post(s)
#26-Oct-18 17:32

The focus of the build was styles and the UI related to styles. We completed most of the things we planned to add for styles for the current series of cutting edge builds with the exception of: asymmetric lines, lines with arrows, curved labels. All of the mentioned items are being worked on and will soon appear in cutting edge builds.

Changes

The area format sample has been changed to a triangle to make it easier to tell apart from a color sample, and it is no longer widened to occupy all of the available space.

Controls in the style editing dialog have been rearranged to better use screen space. The preview is moved to the left. Pickers for colors, sizes and rotations are moved to right below the preview (hopefully this makes it clearer that the fill / stroke colors - controllable both in the dialog as well as in the Style pane - are the main colors, and that colors for individual style elements use them as defaults instead of the other way around). Grids showing format samples have thin row handles.

The preview in the style editing dialog allows changing the canvas color by clicking the color picker button in the top right.

Buttons with format samples in the Style pane have been resized and rearranged to occupy less space while being more legible. Buttons for colors, sizes and rotations have been reduced, while buttons for symbols have been slightly increased.

The Style pane no longer allows selecting multiple format parameters simultaneously and instead allows selecting full format for area / line / point / label. When this happens, the list of format values at the bottom displays and edits full formats (this allows for very quick thematic formatting). All commands used by the list of format values can be applied to full formats: Apply Palette sets fill color to colors from the palette and stroke color a shade darker, Interpolate interpolates fill color, stroke color, size and rotation simultaneously, Darken / Lighten / Grayscale adjust both fill color and stroke color, Reverse reverses everything.

Rendering vector data with thematic formatting optimizes for cases when all values for a format are the same (this frequently happens when thematic formatting is applied to full formats: eg, colors change, but sizes stay the same).

Selecting a button with an already established thematic formatting in the Style pane no longer automatically computes field statistics (these statistics show how many records are in each interval / value; the information is not vital, and given that computing it can take a while, we are now postponing it). To compute statistics, click Refresh Statistics in the toolbar above the list of format values.

The Style pane for images no longer has a button for a "pixel" format sample, which previously had to be clicked to show the rest of the controls; the pane now shows image formatting automatically.

Color picker dropdown automatically sorts colors by hue on startup and includes a button to switch back to sorting by name.

Font picker dropdown includes choices for bold / italic / bold italic versions of the default font (Tahoma).

Symbol picker dropdowns include numerous new symbols and display symbol names.

Format picker dropdowns include a scrollbar and support common scroll keys (up / down, pageup / pagedown, home / end).

Format picker dropdowns display tooltips for format choices. The tooltip includes the name of the choice and its group.

Color picker dropdown shows hex codes (#RRGGBB) for color values on the right side in the wide mode as well as in the tooltip.

Point styles can use SVG paths. Normal graphics only.

There are many (hundreds) new point symbols based on SVG paths from Font Awesome.

New exterior option for points and labels: halo. Supported parameters: stroke width (default 3 pt). Works for both normal and reduced graphics.

Also

(Fix) The query engine no longer sometimes misoptimizes an OUTER join with a complex join expression.

(Fix) Testing connection to a database or web server no longer continues to use specific login and password if those were provided earlier and the dialog has then been switched to use integrated security.

(Fix) Reading TIFF files with tiles in separate planes no longer sometimes fails.

Reading data for an ad-hoc query on SQL Server automatically recognizes geometry values and seamlessly converts them to Manifold geometry.

Reading data for an ad-hoc query on GPKG / Spatialite automatically recognizes geometry values and seamlessly converts them to Manifold geometry.

Reading Spatialite geometry supports TinyPoint geometry subtype.

Connecting to SQL Server caches SRID values for geometry columns in tables and views in MFD_META. This avoids recomputing these values for each session.

End of list.

BerndD

162 post(s)
#26-Oct-18 20:27

Looks great.


Organizations that want to adapt to CHANGE are using products that can adapt.

www.yeymaps.io

rhowitt
52 post(s)
#26-Oct-18 20:12

Just tried this query against my SQL Server:

SELECT Field
  ,[AcresFromMap]
  ,[Geometry]
FROM [Geom_of_Fields]
where [Company_ID] = '561'

Got an error message "Invalid Type", which refers to the Geometry column.

I can run the query without Geometry and the query runs correctly.

Your release notes say this should work.

Dimitri


7,413 post(s)
#26-Oct-18 20:33

where [Company_ID] = '561'

Got an error message "Invalid Type", which refers to the Geometry column.

Can't say without knowing the types. For example, if Company_ID is a numeric type, then 561 should not be in single quotes: that's a type mismatch.

Anyone curious, who might not have SQL Server, can try it against Nwind:

SELECT * FROM [Products] WHERE [Unit Price] = '18';

Generates an error, because [Unit Price] is numeric.

SELECT * FROM [Products] WHERE [Unit Price] = 18;

Works fine.

rhowitt
52 post(s)
#26-Oct-18 21:43

Company_ID is a string field.

As I mentioned above, if I remove the Geometry column from the query it runs correctly. If Geometry is in the query then the error is "Invalid Type"

rhowitt
52 post(s)
#26-Oct-18 21:50

My bad. I was using the ADO.NET SQL Server data source; I switched to the straight SQL Server data source and a valid result is returned.

But now I cannot create a drawing based on the results. When the query is right-clicked there is no schema listed.

tjhb
10,094 post(s)
#26-Oct-18 21:55

Are you writing a native SQL Server query against your datasource, then trying to create a Manifold drawing from the query result?

Well, can that be done? I don't know.

Maybe you should try a Manifold query against the SQL Server datasource, then making a Manifold drawing from the result?

rhowitt
52 post(s)
#26-Oct-18 22:19

Yes, it is a native query, that is the whole point. With a native query, 1) there is no speed penalty, and 2) the data should be directly editable in SQL Server.

The native query returns results nearly instantaneously, while the Manifold query takes 30 seconds.

Yes, I can create a drawing from the Manifold query.

But is this data editable, both geometry and column data, and will the data be updated back in the SQL Server DB?

tjhb
10,094 post(s)
#26-Oct-18 22:55

But is this data [Manifold drawing created from Manifold query on SQL Server data] editable, both geometry and column data, and will the data be updated back in the SQL Server DB?

I think that is the idea, yes, but I haven't tested. (You can?)

Whether you should be able to do the same using a Manifold drawing created from an SQL Server query on SQL Server data, I don't know.

Dimitri


7,413 post(s)
#27-Oct-18 06:54

Whether you should be able to do the same using a Manifold drawing created from an SQL Server query on SQL Server data, I don't know.

It sounds like this has gone beyond the specific features of the new build, 168.4, per se, and has become a generic discussion "how do I create Manifold drawings from native DB queries." That should be in a new thread.

There is usually a way to do what you want, but the details matter and this isn't the thread to discover all the details we need to know. For example, if you have geometry in your SQL Server database, is that SQL Server geometry? etc.

adamw


10,447 post(s)
#27-Oct-18 10:08

I understand where you are coming from and I understand what you are after - or at least I think I do. The ultimate purpose is to be able to do filtering on the database, work with the results as a drawing, in read-write mode, and do all that without creating views - correct?

There is a difficulty here.

Let's look at the requirements.

Performing filtering on the database makes absolute sense (the server is faster and will always be faster than the client at reading a million records and filtering them down to a couple of thousand). It also means that the query performing the filtering has to be native to the database.

SQL Server has two types of queries: views and ad-hoc. Our last requirement precludes using views, so the query has to be ad-hoc. Ad-hoc queries enjoy much less support from the side of the database than views though, so there might be some hoops to jump through and the question is whether we can jump through all of them.

On an earlier attempt to use an ad-hoc query we saw the first hoop: geometry values not being recognized as geometry. This was specific to SQL Server and this is now solved (for SQL Server connections, we'll likely make it work for ADO.NET SQL Server connections as well).

There is now a second hoop to jump through: despite what the word 'ad-hoc' means, we can store an ad-hoc query on the database as a Manifold component, however since we don't tell the database that we are storing a query that we are going to run repeatedly (that would be a view), the database does not pre-analyze the query and does not compute its schema. We can know what the schema of the result is when we run the query, but not before. We *can* jump through this particular hoop - say, by caching the last known schema, or by allowing one to create a drawing on an ad-hoc query without knowing its schema and just entering the name of the geometry field / providing other details similarly. (The latter can already be done in the UI - you can copy and paste an existing drawing and just adjust its properties, replacing the name of the source table and the name of the geometry field.)

There is the next hoop: the result of an ad-hoc query lacks index information. When the result table is the product of an ad-hoc (this is important) SELECT * FROM t WHERE ..., you cannot use a spatial index on T to filter it further. You just get a stream of records and you cannot tell the database 'please filter this further with this criteria' - the stream of records does not have a name, there is no way to refer to it. This means we cannot really show the result table of an ad-hoc query right away on a map and have to create a temporary index to do that. We do offer to create a temporary spatial index to show the data automatically, but this is unpleasant because you have to click the button and because it might take some time (not overly long, but still). If SELECT ... was in a view, we would have been able to use an existing spatial index on T to filter it.

There is one more hoop related to the previous one: because the result of an ad-hoc query lacks index information, it is read-only. We want writeback and we cannot have it, because we do not know which table to write to. Now, in this particular place we might be able to do something by specifying manually which table to write to and which fields to use as a primary key -- or possibly by writing fancier queries like SELECT FOR UPDATE, although this has tons of other drawbacks.

There might be more hoops ahead although I think this should be mostly it.

All of the hoops are perhaps solvable. But as you can see, what we basically end up with is having the user writing an ad-hoc query, storing it, and then telling Manifold instead of the database what the query means to do and how to handle it. Honestly, this feels like creating a crutch and doing what database views do, but on the client, because of the requirement to not use views. Could we revisit why it is undesirable to have views? There are obviously reasons, but solutions for them might be easier and cleaner.

adamw


10,447 post(s)
#27-Oct-18 10:28

One more thing:

Yes, I can create a drawing from the manifold query.

But is this data editable, both geometry and column data, and will the data be updated back in the SQL Server DB?

Yes.

If you use Manifold queries (as long as you structure them properly, but that is not difficult to do), the result is going to be writable. The only problem is that the filtering will be performed on the client, not on the server; filtering on the server is faster.

A simple example.

We connect to SQL Server and run this:

--SQL

CREATE TABLE t5 (id INT IDENTITY (1, 1), a INT, g GEOMETRY);

INSERT INTO t5 (g) VALUES

  (geometry::STGeomFromText('LINESTRING (100 100, 20 180, 180 180)', 0));

ALTER TABLE t5 ADD CONSTRAINT t5_id PRIMARY KEY CLUSTERED (id);

CREATE SPATIAL INDEX t5_g ON t5 (g) WITH (BOUNDING_BOX=(0, 0, 200, 200));

This creates a table with a geometry field.

If you perform the above from a Manifold command window, right-click the data source in the Project pane and click Refresh to let it catch up to the changes - or close and reopen the project, or delete and re-create the data source.

We then create the following query in the MAP file and reference the table we just created on SQL Server:

--SQL

SELECT * FROM [SQL Server]::[dbo.t5] WHERE id=1;

We can then right-click that query and create a drawing based on it. We can open the drawing and it will show filtered records. If you Alt-click an object, you can edit its fields (the A field was added specifically to show that).
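To check the writeback path without Alt-clicking, here is a hedged variation (the value 42 is just an illustration; the table, fields, and data source name are the ones from the example above): change the A field natively, then right-click the drawing layer and Refresh to see the change come back from the database.

--SQL9

-- Illustrative sketch, not part of the original example: update the A field
-- directly on SQL Server through the data source, then Refresh in 9.
EXECUTE [[
  UPDATE t5 SET a = 42 WHERE id = 1;
]] ON [SQL Server];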

Dimitri


7,413 post(s)
#27-Oct-18 06:35

As I mentioned above, if I remove the Geometry column from the query it runs correctly.

Ah, I see. I missed that.

tjhb
10,094 post(s)
#26-Oct-18 21:51

[cross posted; removed]

adamw


10,447 post(s)
#27-Oct-18 09:17

Here is a simple test that I did.

Connected to a SQL Server database. Opened a command window on the SQL Server data source (it opens in native mode by default = running all queries on the database, using SQL syntax of the database, and I left it in that mode).

Ran the following:

--SQL

CREATE TABLE t5 (id INT IDENTITY (1, 1), g GEOMETRY);

INSERT INTO t5 (g) VALUES

  (geometry::STGeomFromText('LINESTRING (100 100, 20 180, 180 180)', 0));

Then ran this:

--SQL

SELECT * FROM t5;

The geometry field came back as geometry.

Could you repeat this test on your system? Does the geometry field come back as geometry in the SELECT?

What is the type of the Geometry field in your original table? What is the version of SQL Server Native Client?

PS: Just saw that this has been resolved already: you used ADO.NET SQL Server data source instead of SQL Server. Will read the whole thread first in the future.

rhowitt
52 post(s)
#27-Oct-18 19:26

I think we are all on the same page. My post here was simply my test of what I THOUGHT was included in this release (running a native SQL Server query, getting a geom in the results, and being able to convert that to a drawing). If converting to a drawing is not currently supported, no problem.

Adding the ability to edit data in SQL Server will truly be cutting edge.

rhowitt
52 post(s)
#28-Oct-18 13:48

In addition to Manifold, our company also uses GeoServer, an open source web mapping server. GeoServer allows us to publish our clients' data via a web interface. The reason I mention GeoServer is that it also connects to our SQL Server. And this, in conjunction with OpenLayers (a JavaScript library), provides a fast, efficient and flexible interface for presenting data. Manifold, for paper output, data analysis, and generation of vector data, is still our go-to tool.

GeoServer does not interrogate the SQL Server database and then provide all of the tables; it forces me to pick the tables I want to be available for presentation. By doing this, it collects only the table definitions and SRID needed for the tables I intend to use. And by defining these tables up front it is able to pre-populate the table metadata. Although I have not looked at the internals of the API code, using the OpenLayers API, the table data can be filtered so that only the subset of records needed to display on a web page is sent to the client. This filtering of the data happens in seconds, not tens of seconds. And if I code the web page correctly, both the vector data and the tabular data are editable.

I bring the above up because I think it is a model that addresses many of the points you made about what is needed in order to provide geometry data and make it editable.

Dimitri


7,413 post(s)
#29-Oct-18 08:07

If converting to a drawing is not currently supported

It is supported. You just have to do it correctly. See adamw's discussion.

rhowitt
52 post(s)
#29-Oct-18 13:44

Yes, the query returns a result showing that g is a geom field.

No, I cannot create a drawing from the results.

1) the column in my table geom_of_fields (this table was uploaded to Manifold for testing) is a Geometry

2) the system is using Microsoft SQL Server 2008 R2 Native Client

rhowitt
52 post(s)
#30-Oct-18 16:26

Today I switched to using the Microsoft SQL Server 2012 Native Client and then reran the native query... Still no joy. The query results could not be turned into a drawing.

As per the Manifold documentation, the results have a gray background, meaning there is no index available and therefore no drawing can be created.

tjhb
10,094 post(s)
#31-Oct-18 00:54

As per the Manifold documentation, the results have a gray background, meaning there is no index available and therefore no drawing can be created.

That's not correct.

You can create a drawing provided that there is an RTREE index on a Geom field.

A grey background means that there is no BTREE index. That does not mean that a drawing can't be created, but rather that objects (in table or drawing) can't be selected.

rhowitt
52 post(s)
#31-Oct-18 02:00

OK, so why can't a drawing be made from my native query?

The table in question has a spatial index

tjhb
10,094 post(s)
#31-Oct-18 05:43

I understand that you're frustrated, and I'm sorry I'm not experienced enough to help.

I suggest, though, that you re-read Adam's post above concerning all the hoops that the software must jump through to do exactly what you are asking--re-read as many times as is necessary, plus a few extra, bearing in mind that Adam's writing is exceptionally precise and succinct, so an unusually long post from Adam like this is really something to take your time over. Don't worry if you don't feel you understand everything after "only" 3 or 4 reads.

Also: try replying to the question in his last paragraph in that post. You might have missed it. Speaking strictly from experience, in trying to draft an answer to Adam's question, you might find that murky things start to become much clearer. Even if that's not the case, you might trigger a great suggestion in return.

[Added] Also, write down and post absolutely all of your steps, with the actual result in each case (and always avoid empty conclusions like "it doesn't work").

adamw


10,447 post(s)
#01-Nov-18 08:33

Like Tim says, I answered above.

The long and short of the answer is: not recognizing geometry as geometry in ad-hoc queries was one of the issues standing in the way of your desired workflow. That particular issue is now solved, but there are several other issues left to solve. We can solve them all if needed; however, since the solutions necessarily involve you, as a user, providing additional input regarding the query manually - essentially describing what it does instead of letting the database figure it out automatically - maybe it is a better idea to leave that to the database and use views. At the end I am asking why, specifically, you do not want to use views. Maybe the reason you don't want to be using them has a cleaner solution.

rhowitt
52 post(s)
#01-Nov-18 14:28

Views are problematic for a couple of reasons:

1) Views are not dynamic. That is, a view cannot be defined with a "where" clause that has a parameter to be filled in on demand.

2) Because of 1) above, a view has to be defined for every way that one would like to slice up a table into sizable chunks. As an example, a table contains all parcels for all clients, but for editing purposes, only one client's parcels should be shown in a client's project drawing. So a view has to be created for every client.

3) Because of 2) above, we are currently managing over 2000 views and this number grows every time a new client is added.

If instead of a view a query could be used, only a couple of queries would be needed to manage all of the ways to slice a table.

If there is a better way to manage the editing of a large dataset, I am all ears.

rhowitt
52 post(s)
#01-Nov-18 14:51

New side effect of Manifold 9: when creating a command window in Manifold 9, Manifold is creating an [mfd_meta] table in the SQL Server database to which it is connected. Due to all of the views, this [mfd_meta] table has had thousands of items created by Manifold. This first-time setup of the meta table is requiring tens of minutes by Manifold. And whatever query Manifold is running on the SQL Server is pegging the CPU. (This is related to my original bug, which has been submitted to Manifold: "Select Top 1" from a very large view takes a very long time.)

In addition, our Manifold 8 projects that access this database are now taking tens of minutes to start.

adamw


10,447 post(s)
#01-Nov-18 15:54

This is a product of the change that we did that caches SRID values for tables and views.

A view (like the SELECT TOP 1 ... which you sent and we verified) taking a very long time to run is not related to anything in our code; it is simply a view taking a long time to run, perhaps because there is a second database involved. With the caching code we will only run it once, as long as we can write the produced SRID value back to the database.

We will check what's up with 8 taking a long time to connect to the database. If that's a side effect of the change that we implemented, we will of course fix it. Just to be clear, does it take a long time to connect to the database from 8 when 9 is still running or when it is not running as well?

adamw


10,447 post(s)
#01-Nov-18 16:04

You can use a limited number of views and modify them on demand instead of just keeping a view for each client.

The meat of a view can be put into a table-valued function and the view by itself can be just SELECT * FROM <function>(<parameters>), so that its text is easy to modify.

You can actually modify a view by running a query in 9: EXECUTE [[ ALTER VIEW ... ]] ON <datasource>. This way there are minimal changes to the workflow.

Instead of:

open 9, open the MAP file with a data source for the database, open a query that retrieves data from the database, adjust query text to do the filtering you currently want, then open a drawing linked to the query, do Refresh on the drawing to re-fetch the data with adjusted filtering, then add / change / delete objects in the drawing,

we:

open 9, open the MAP file with a data source for the database, open a query that does EXECUTE / ALTER VIEW on the database, adjust query text to do the filtering you currently want, run the query, then open a drawing linked to the query, then add / change / delete objects in the drawing.

There are currently two issues with the latter sequence, both related to SQL Server, one of them requires a fix on our side (which we already implemented a couple of days ago), but other than that, the second workflow seems usable, no?

I will try to provide a specific example tomorrow. (I'll also respond to your last post which I haven't yet seen, we cross-posted.)

rhowitt
52 post(s)
#01-Nov-18 19:30

Is there one master view that gets altered?

How does this work with multiple users that want to edit different clients?

adamw


10,447 post(s)
#02-Nov-18 06:41

One master view per user. Or several views per user, but a limited amount.

Say, I am a user. I log in and I want to work with client X. I adjust my view on the database to show data for client X. After some time I am done with client X and want to now work with client Y. I adjust my view to show data for client Y. If I want to look at client Z in the middle of my work with client Y, I adjust my second view (set up a new one if I don't have any) and work with client Y through view 1 and with client Z through view 2.

When you log in as a different user, you are working with your own views; our views are separate.

Views themselves can be organized like this:

--SQL for SQL Server

 

-- parameterized view

CREATE FUNCTION filter_func(@client NVARCHAR(50))

RETURNS TABLE AS

RETURN (SELECT * FROM "data" WHERE client_id = @client);

 

-- specific view, currently selects data for 'bob'

CREATE VIEW filter_adamw_1 AS SELECT * FROM filter_func('bob');

All of the adjusting can be done from 9:

--SQL9

EXECUTE [[

  ALTER VIEW filter_adamw_1 AS SELECT * FROM filter_func('joe');

]] ON [SQL Server];
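A hedged follow-up sketch (not from the original post; it just reuses the names above): in 9, the per-user drawing would then be based on a plain query that reads that view, along the lines of:

--SQL9

-- Illustrative only: read the per-user view; a drawing can be created on this query.
SELECT * FROM [SQL Server]::[dbo.filter_adamw_1];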

As I said, the build you currently have has an issue (views on SQL Server appear readonly even when they are updatable), but we already have a fix for that.

rhowitt
52 post(s)
#01-Nov-18 15:59

So here is an example of the type of system I want Manifold 9 to help manage, especially in terms of editing geometry data, but also where Manifold could help when adding a new client to the database.

CREATE TABLE [dbo].[Geom_of_Parcels](
    [OID] [int] IDENTITY(1,1) NOT NULL,
    [Version] [int] NULL,
    [Year] [int] NULL,
    [ParcelName] [nchar](50) NULL,
    [Client_ID] [nchar](10) NULL,
    [PrintOrder] [int] NULL,
    [Geometry] [geometry] NULL,
    [Attr_a] [nchar](10) NULL,
    [Attr_b] [nchar](10) NULL,
    [Attr_c] [nchar](10) NULL,
    [Attr_d] [nchar](10) NULL,
    [Active] [char](1) NULL,
    [AcresFromMap] [float] NULL,
    [AcresManuallyEntered] [float] NULL,
    [editDate] [date] NULL,
    [EditedBy] [int] NULL,
    [WKT_temp] [varchar](max) NULL,
    CONSTRAINT [PK_Geom_of_PARCELS3] PRIMARY KEY CLUSTERED
    (
        [OID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

CREATE TABLE [dbo].[Geom_Of_SubParcels](
    [OID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [Version] [int] NULL,
    [ParcelName] [char](50) NULL,
    [SubSection] [char](50) NULL,
    [Client_ID] [char](10) NULL,
    [Year] [int] NULL,
    [Geometry] [geometry] NULL,
    [Active] [char](1) NULL,
    [EditedBy] [nchar](40) NULL,
    [EditDate] [date] NULL,
    [AcresFromMap] [float] NULL,
    [AcresManuallyEntered] [float] NULL,
    [Attr_a] [nchar](10) NULL,
    [Attr_b] [nchar](10) NULL,
    [Attr_c] [nchar](10) NULL,
    [Attr_d] [nchar](10) NULL,
    [LabValue1] [smallint] NULL,
    [LabValue2] [float] NULL,
    [LabValue3] [char](5) NULL,
    [LabValue4] [smallint] NULL,
    [LabValue5] [float] NULL,
    [LabValue6] [float] NULL,
    [DateOfReport] [datetime] NULL,
    [geometry_temp] [varchar](max) NULL,
    CONSTRAINT [PK_Geom_Of_Sections1] PRIMARY KEY CLUSTERED
    (
        [OID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

These two tables will track hundreds of clients who each have anywhere from 1 to 500 (or more) parcels. And each of the parcels can have anywhere from 1 to 50 (or more) subparcels. In addition to these two tables in SQL Server there are many more tables that track all manner of tabular data about the client. Most of this data is available via a software management system that allows multiple people to have access to this data, all at the same time. (I.e., this is not a standalone database sitting on one desktop computer.)

Because there may be tens of thousands of parcel and subparcel records, a starting set of capabilities in Manifold that would be of great use is:

  • 1) Be able to pull in just one client/year set of records via a query.
  • 2) Be able to edit the geometry data.
  • 3) Be able to edit some of the tabular data.
  • 4) Create new entries in the table for the current client/year.
  • 5) This data should be available within seconds, not tens of seconds.

There are other functions that Manifold could probably perform once this basic set of capabilities was in place (e.g. do a join query with another dataset and, based on those results, update the Parcel table).

From a programming standpoint I am not sure why you are using the model you are. That is, why are you storing all of the metadata back to SQL Server?

If one wants to write a query about the Parcel data, why is Manifold not maintaining an object, within Manifold, that has the source of the data, the list of geometry fields, SRIDs, list of indexes, etc.? All of this data can be queried from SQL Server. This would allow Manifold to have only the data it needs to manage the set of tables that the end user has specified are needed within the current project.

If you want me to continue with filling in my wants for a GIS data management system I can continue....

adamw


10,447 post(s)
#02-Nov-18 10:32

Thanks for the explanation.

All of your requirements are doable.

The most future-proof approach is, I think, this:

Use SQL Server views. Have one or more views for each user selecting data for the client / year combinations with which that user has to currently work. Work with drawings based on these views (they will be automatically created for geometry fields). Can edit geometry, edit attributes, etc. See my posts above for details.

There is an alternative approach:

Instead of SQL Server views, use queries in 9. The stickiest point is having the filtering be performed on the database, to avoid transporting a million records to the client and having the client perform filtering locally. But if the filtering is just ... WHERE field1=... [AND field2=...], we can generally offload it completely to the database, as long as we can detect that the database will be able to do it fast. That is, even though you only have a table on the database, with indexes, but without any views, and a query in 9 that works with that table, 9 will compose and send a query to the database that will engage the indexes and get all the benefits of on-database filtering that a view would have.

For this to work well, 9 has some requirements from the database. But they are nothing too unusual and they are being relaxed all the time as our query engine gets smarter and learns how to get more out of each individual database that we support. Right now, on SQL Server, the best scenario is when the table has a single field, with no nulls allowed, with a clustered index on it, and the filtering is done on that field. Your scenario above (a) filters on two fields instead of one (Client ID, Year), and (b) has clustered primary key on a different field (OID). However, (a) can be solved by having a single combined field with text for both Client ID and Year. And if you can then make *that* your clustered primary key, you are all set: you can use no views, just queries in 9 and WHERE on that combined field will go straight to the database and the database will be fast to serve it. Or, if most of the filtering is on the ClientID and not on Year (say, there are 1000 clients and only 3-4 years per client on average), you can keep the fields separate, make a clustered index on just ClientID and then when you use WHERE ClientID=... AND Year=... in a query in 9, the part for ClientID=... will go to the database, and the part with Year=... will stay on the client, but the performance will still be fine because the database still does most of the filtering.
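As a hedged sketch of what such a query in 9 could look like (the ClientYear field, its '561-2018' value format, and the data source name are assumptions for illustration, not something defined in this thread):

--SQL9

-- Sketch only: assumes a combined text field ClientYear carrying both
-- Client ID and Year, used as the clustered key, so the WHERE below can be
-- sent to SQL Server essentially unmodified.
SELECT * FROM [SQL Server]::[dbo.Geom_of_Parcels]
  WHERE [ClientYear] = '561-2018';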

The main benefit of the first approach (views) compared to the second (queries in 9) is that if the filtering criteria suddenly has to be complex (for example, if you'd want to join a secondary table and look into its values to determine whether you want to return a record or not), views will keep all of that work on the database, while queries on the client might only be able to keep part of it there. This might or might not be important for the performance, depending on the data. Everything else is mostly better with queries on the client.

rhowitt
52 post(s)
#02-Nov-18 13:18

Thanks for the road map of how to possibly implement a 9 solution. I will experiment with your suggestions.

rhowitt
52 post(s)
#05-Nov-18 03:32

Started to work through your road map.

A note to the Manifold developers: it would be helpful if the query log included the execution time of each query. Although the query shows execution time while it is running, it is lost once the query is done. Execution time is valuable information for determining the efficiency of different queries.

In SQL Server the following three items were defined:

CREATE TABLE [dbo].[GIS_M9Testing](
    [ClientID] [nchar](10) NOT NULL,
    [Year] [int] NOT NULL,
    [Parcel] [nchar](50) NOT NULL,
    [Geometry] [geometry] NULL,
    [Geometry_Temp] [varchar](max) NULL,
    [OID] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

- defined a clustered index on ClientID
- [Geometry] has a spatial index
- OID is the identity column that drawings need
- populated this table with data from an existing table

CREATE FUNCTION filter_func(@client nvarchar(50))
RETURNS TABLE
AS
RETURN
    (SELECT * FROM GISM9Testing
     WHERE ClientID = @client);

CREATE VIEW filter_rickh_1
AS SELECT * FROM filter_func('100');

Then in Manifold a data source and two queries were created:

Data source to access the SQL Server

UpdateView query:

-- $manifold$
--
-- Auto-generated
EXECUTE [[
ALTER VIEW filter_rickh_1 AS SELECT * FROM filter_func('100');
]]
ON [AZSQL1_GISForManifold]

Select Data from SQL Server View query:

-- $manifold$
--
-- Auto-generated
SELECT * FROM [AZSQL1_GISForManifold]::[dbo.filter_rickh_1]

 

The query returned results with a schema that showed an rtree index on the geometry column. These results could be turned into a drawing. However, neither the tabular results nor the drawing were editable.

I also tried a Manifold query of the SQL Server table with a where clause on the field that has a clustered index. This was done to test how fast the query was, whether the results were editable, and whether the results could be turned into a drawing.

-- $manifold$
--
-- Auto-generated
SELECT * FROM [AZSQL1_GISForManifold]::[dbo.GISM9Testing]
where StringTrim(company_ID, ' ') = '561';

This query executed rather quickly, but the tabular results were not editable. In addition, although the results did have a schema and could be turned into a drawing (which had a red exclamation point icon), the schema did not show an rtree on the geometry column even though it showed the column as a geom, and I was unable to display any of the polygons.

I am not sure where to go from here.

adamw


10,447 post(s)
#05-Nov-18 07:29

As I said in an earlier post, the build you currently have has an issue (views on SQL Server appear readonly even when they are updatable). That's perhaps why the view seems to work fine, except it is non-editable.

We already have a fix for that. The fix will be in the next build. We currently plan to issue the next build at the end of this week, but if you contact tech support, we can provide you with an early version of it which will contain the fix - this will allow us to move further.

Also, regarding this:

- defined a cluster index on ClientID

- [Geometry] has a spatial index

could you specify the exact commands you used?

You say later that if you run a Manifold query (SELECT * with filtering in WHERE) on the table, then the schema of the result table does not show an r-tree index on the geom. Does the schema of the original table on the server show an r-tree index? Does it appear if you refresh the data source (if you add a spatial index to a table outside of our UI or using SQL native to SQL Server, we won't know that you added it until you refresh the table / data source)?

Last, we do display execution times for queries; they are in the log window (View - Panes - Log Window). Maybe we should display them in the log specific to the command window as well.

rhowitt
52 post(s)
#05-Nov-18 19:49

Looking deeper into SQL Server, here are some limitations per Microsoft:

Clustered index structure overview. In a clustered table, a SQL Server clustered index is used to store the data rows sorted based on the clustered index key values. SQL Server allows us to create only one clustered index per table, as the data can be sorted in the table using only one order criterion.

Creates a spatial index on a specified table and column in SQL Server. An index can be created before there is data in the table. Indexes can be created on tables or views in another database by specifying a qualified database name. Spatial indexes require the table to have a clustered primary key.

This says that to have spatial indexing, there has to be a primary key. The primary key has to be unique and will be THE clustered index in the database. SQL Server will not allow another clustered index.

So one of the solutions proposed above, defining a clustered index only on ClientID or even ClientIDYear so that the table could be quickly filtered by a Manifold query, will not work, because those columns alone will not produce a unique identifier (item 'a' in the quote below).

For this to work well, 9 has some requirements from the database. But they are nothing too unusual and they are being relaxed all the time as our query engine gets smarter and learns how to get more out of each individual database that we support. Right now, on SQL Server, the best scenario is when the table has a single field, with no nulls allowed, with a clustered index on it, and the filtering is done on that field. Your scenario above (a) filters on two fields instead of one (Client ID, Year), and (b) has clustered primary key on a different field (OID). However, (a) can be solved by having a single combined field with text for both Client ID and Year. And if you can then make *that* your clustered primary key, you are all set: you can use no views, just queries in 9 and WHERE on that combined field will go straight to the database and the database will be fast to serve it. Or, if most of the filtering is on the ClientID and not on Year (say, there are 1000 clients and only 3-4 years per client on average), you can keep the fields separate, make a clustered index on just ClientID and then when you use WHERE ClientID=... AND Year=... in a query in 9, the part for ClientID=... will go to the database, and the part with Year=... will stay on the client, but the performance will still be fine because the database still does most of the filtering.

So that leaves us with just "programmable views" per user, with each GIS user having a set of programmable views.

While programmable views could work, at this point it is not an ideal solution. Currently every one of our clients has a Manifold project. Each client has their own project because each client has a lot of unique geospatial infrastructure that is mapped within the project. This was done because it was not possible to keep all this data in SQL Server tables without building out views for each of the infrastructure layers. (We track in the neighborhood of 20-30 infrastructure layers.)

Switching to programmable views means either that in a client's project the "parcel layer" will not look like the correct set of parcels until the programmable view is updated to the current client, or

we move all infrastructure data to SQL Server spatial tables and then make only one Manifold project that switches the view for each layer to the current client. (This is interesting, but for us to implement it would require a large effort to move all of the data.)

Parameter-driven queries that can determine their schema ultimately provide the most flexible interface. I know that puts a lot of work in Manifold's lap to make that work; but at the end of the day you would have one awesomely powerful GIS engine.

adamw


10,447 post(s)
#06-Nov-18 07:03

Fair enough. We realize that clustered is a highly contested position between indexes, and that frequently what you want to filter on does not have the luxury of being clustered. We do get some mileage out of non-clustered indexes on SQL Server as well; it is just that we can currently get more out of a clustered index, since SQL Server optimizes much more heavily if the index it hits is clustered. We have some ideas on how we can get more out of both types of indexes though - and not just on SQL Server - so, stay tuned.

rhowitt
52 post(s)
#09-Nov-18 17:21

Changing subject slightly...

We have something like a few hundred M8 project files; we have been unable to open any of these projects successfully in M9.

Is that to be expected at this point (i.e. M9 is not fully compatible with M8), or is there a way that M8 projects can be included in an M9 project that works today?

tonyw
736 post(s)
#09-Nov-18 17:34

is there a way that M8 projects can be included in an M9 project that works today?

With your project open in M9 try File > Import then navigate to and import your Manifold 8 .map file. Importing the M8 project brings in all the components from the M8 project. All the thematic formatting from M8 is preserved too which was a pleasant surprise.

In my case, I started my new map project in M9 but needed components from a previous map project in M8. My intention wasn't to carry on with the M8 project in M9, only to re-use some of the components.

tjhb
10,094 post(s)
#09-Nov-18 17:37

That is completely abnormal. All Manifold 8 files should open without problem in Manifold 9 (with automatic conversion).

Something is up at your end.

What happens when you try to open one of these files in 9? Any message?

Can you open the same file(s) in 8?

Dimitri


7,413 post(s)
#10-Nov-18 15:41

Changing subject slightly...

Please start a new thread for a new subject.

adamw


10,447 post(s)
#11-Dec-18 13:53

A follow up (on "stay tuned").

With 9.0.168.6, you can use the following workflow:

(To recap, we have a database with a big table with geometry data, split by ClientID. We want to be able to work with data for a particular client as a drawing, to do both reads and writes, and to avoid creating views on the database.)

1. Create a new MAP file. Link the SQL Server database as a data source.

2. Create a Manifold query that selects data from the table for a particular client.

Eg: SELECT * FROM [sqlserver]::[data] WHERE [clientid]=1025;

With optimizations to the query engine made in 168.6, Manifold takes this query and sends the WHERE part basically unmodified to the database. The database can then use the index it has on ClientID to produce the data; the index makes sure the database does this fast (it does not have to inspect all records in the table).

3. Add a call to TableCache / TableCacheIndexGeoms to cache the result of the query in memory.

You can either modify the query you already have: TABLE CALL TableCache( (SELECT ...), TRUE);

...or add a second query: TABLE CALL TableCache([query1], TRUE);

If the result table of SELECT ... already has an RTREE index on the geometry field you want to use for a drawing, use TableCache, otherwise use TableCacheIndexGeoms.

4. Create a drawing based on the query with TableCache.

Done.
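Putting steps 2 and 3 together, a hedged sketch of the cached query (assuming TableCacheIndexGeoms takes the same two arguments as TableCache, and reusing the example names from step 2):

--SQL9

-- Sketch: fetch the records for client 1025 from the database, cache them in
-- memory and build a temporary r-tree index so a drawing can use the result.
TABLE CALL TableCacheIndexGeoms(
  (SELECT * FROM [sqlserver]::[data] WHERE [clientid]=1025),
  TRUE
);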

Now you can see the objects for the specific client ID and can edit them seamlessly in the drawing window - or in a table window if you want. When you first open the drawing, the query will fetch all objects for the specified client ID into memory, and that might take some time, but attempting to render the entire drawing would do the exact same thing anyway, so caching will nearly certainly reduce the amount of transferred data, not increase it. If you change data for the client not through the drawing - say, by running an UPDATE query in 9, or even outside of 9 - you can refresh the drawing by right-clicking the drawing layer at the bottom and selecting Refresh to pick up the changes.

(It just occurred to me that it would be better to post the above in the thread for 168.6. If you post a reply, post it there.)

adamw


10,447 post(s)
#15-Nov-18 07:35

Status update:

We are planning to issue 9.0.168.5 next week. There are significant extensions and additions to styles, big improvements to working with data stored on databases (particularly SQL Server), additions and fixes to dataports, etc. Legends are coming in the build after that, we are working on them in parallel.

LeRepère
153 post(s)
#26-Nov-18 13:36

Without wanting to put on undue pressure, is it possible to get a new status update?

Dimitri


7,413 post(s)
#27-Nov-18 05:50

Underway. Stay tuned!

adamw


10,447 post(s)
#27-Nov-18 17:20

(replied to wrong post, moved)

adamw


10,447 post(s)
#27-Nov-18 06:52

Yes, apologies. We are looking at issuing the build later this week. Tying up a lot of loose ends.

adamw


10,447 post(s)
#27-Nov-18 17:21

Also, since the original version of 9.0.168.4 will stop working on December 1 and we are already pretty close to that date, we are going to provide an extended version of 9.0.168.4 which will continue working past it. Both because "this week" technically continues into December and as a convenience to those who are using the cutting edge build and don't want to have to update in a hurry when it stops working. This will happen tomorrow.

adamw


10,447 post(s)
#03-Dec-18 07:21

Time for one more update, hopefully the last one of its kind.

We need a little more time to finish the build. We did a lot of work with styles and after how much we did, it just seems wrong to leave some of the features that we added unpolished or unfinished. We will take a few days to do this. For the build date, we are tentatively looking at Wednesday or Thursday, this week. We'll also have many fixes and small additions in areas other than styles.

Apologies for the delay.

Dimitri


7,413 post(s)
#03-Dec-18 18:09

It's worth some extra effort. Many conveniences, which together make it quicker and easier to produce appealing displays. I did the below in what seemed to be seconds, but probably took more than that, a few minutes (click on it to make it bigger). The lines are dynamically created in a computed field, and the distances in miles from the centroid to the "doctor" icons are likewise dynamically computed in a computed field, and then used in labels.

Attachments:
calaveras._color_labels.jpg

tjhb
10,094 post(s)
#04-Dec-18 00:45

That is a nice example--provided that everyone in Calaveras County can (afford to) get to the doctor by helicopter.

A better GIS example--which might also show some current major deficiencies in 9--would be to show relative distances to medical treatment by road.

(Road travel times would be even better.)

Try it.

artlembo


3,400 post(s)
#04-Dec-18 03:28

yes. 8 has the ability to create drive-time zones (isochrones). Honestly, that's the way to go for most of these kinds of applications - especially important for me, as I live on a peninsula, and the Chesapeake Bay is only a few miles wide in some spots, yet a bridge across the Bay might be 80 miles away.

Dimitri


7,413 post(s)
#04-Dec-18 06:20

(Road travel times would be even better.)

No, since that wasn't the task. Ah, Internet... amazing how universal the impulse is to ignore what we don't know in favor of what we think should be. :-) I do it too, so I'm not pointing fingers.

[Lucky that doctors don't work that way, or if somebody came in complaining about a cough the doctor would reply "Splendid! Let's amputate that foot right now!" ]

In this particular case, the request was to generate a series of straight line distances between the centroid of a county and the given locations of medical offices. The person making the request seemed to be alert, so I managed to repress my immediate impulse to lecture him that he shouldn't have asked what he asked, but instead should have asked about drive times to the closest BBQ restaurants.

It's baffling, of course, why anyone would care about doctors more than BBQ restaurants, but, well... you know how it is. There's one in every crowd.

There are several scenarios that come to mind where straight line distances might make sense:

a) straight line wireless distances from a central antenna to medical facilities, for example, for emergency communications systems or direct, private Internet bridges for durable, remote consultations.

b) collection of samples by drone from doctors to a central lab or dispatch of needed medicines by drone. In an ideal setting one might imagine that the distance to be flown should be less than the range of the drone, etc.

c) the distance from local medics by helicopter to the regional trauma center.

d) doctors reaching the end of their rope dealing with health care bureaucracy and being unable to continue without an air delivery of BBQ food from the county center, where a BBQ smoker is kept running 24/7.

e) quick and dirty subsetting or assignments of doctors to regional centers in regions where road networks are reasonably dense, where it is more important to quickly do the task, without any delays gathering and debugging road networks, than it is to spend a lot of time and computation doing routing that makes zero difference in the result and that absolutely everyone will ignore in real life anyway.

This task wasn't about routing, but my own view on routing is that it often is a classic GIS mistake to do it locally on a desktop. The reason is that routing is so often dependent upon real-time traffic, such as jams and accidents, that the route generated on the desktop without real-time data on traffic is highly misleading. Routing servers like Google are much better in such cases.

tjhb
10,094 post(s)
#04-Dec-18 07:05

Good answer... though I can't see what post you were replying to. Thread is too long (and not focussed). But fair enough. A bad question was asked somewhere in here. Bad answer required.

But it is no answer for Manifold 9. "Google it" is just stupid. We obviously need base graph functions--this is GIS--and almost certainly the routing tools that would be built on them.

Alternatively--this would be almost as good--we would need a clear undertaking that routing tools would not be built in to Manifold 9 in the foreseeable future. Then one of us could invest the time to write them, and sell them back to this community for a very modest sum.

Dimitri


7,413 post(s)
#04-Dec-18 08:59

But it is no answer for Manifold 9.

Who said it was? The emergence of fast, convenient, and extensive styling tools, used in an example of solving somebody's requested task, is a good thing for Style. That's all. It is not a manifesto about routing.

"Google it" is just stupid.

Hmm... well, I didn't write that, but since you have, I respectfully disagree. It turns out that using Google as a routing engine in most cases of real life routing isn't at all stupid. On the contrary, it works much better than desktop routing in most cases (not all, of course, but most...) where routing is used in the real world.

If you disagree, try a real-world example as routing is used by billions of people around the world. John is a taxi driver for Uber who needs to get from one side of Kansas City to an address on the other side. If he is smart, he uses an online routing engine that takes real time traffic into account. That works far, far better in most urban settings in most real life situations than offline desktop routing.

One might say, "well, I don't mean routing as it is used by billions of people, I mean routing as it is used by a handful of guys doing drive time zones in GIS." Well, OK, let's look at that. Given the technology of an earlier day when people had to do the best they could without real life data, drive time zones that didn't reckon the messy reality of real life are OK. These days, you can do better by leveraging the big data that Google and others have: it's not just one set of drive time zones, it is several sets of drive time zones for different times of the day and night.

There are certainly ways of dealing with that on a desktop. For example, you can have multiple speed limits on roads for different times of day and re-do the drive time computation using different sets of speed limits. But the reality is that few drive time zone analyses as most people do them will reckon such effects, because very few people take the time to get such data, etc.

That such real time traffic factors have a big influence on road network analytics can be seen from routine in-car navigation even if our ultimate task is drive time zones. If you compare offline with online routing you can easily see it.

I like to combine my hobby of traveling to new places with my interest in spatial software. When my wife and I travel somewhere, or when we have friends or family visit, we often take extensive car trips to interesting spots for tourism. Some of those trips have involved truly epic segments where poor routing can add hours to a trip. So I always travel with online and offline navigation options.

I prefer MapFactor Navigator for in-car offline navigation. It uses OSM streetmaps for routing. On a typical trip I'll have MapFactor and all of the country maps for the region on my telephone, plus on a tablet for backup. I'll also buy local SIM cards with good, temporary data plans, for the country, both for my telephone and for the tablet. Where there is good Internet through the phone network we can use online routing, but when that cuts out we have offline routing running.

Mobile Internet can be surprisingly iffy even in developed countries. On a recent car trip through Northern California, using SIM cards from T-Mobile (not the best provider, I grant, but still...) we lost mobile Internet at least 30% of the time, even when sticking to major highways like I-5, cal 99 and so on. That was a significant factor trying to route around the smoke and keeping a good distance from Chico and Paradise, the very active wildfire zone. In Europe it's the same deal, where you often lose mobile Internet even on the Autobahn through Germany.

On the road, I'll have my telephone mounted on a suction cup mount in front of me running MapFactor offline, while my wife navigates using Google on the tablet. That gives a great comparison between a very good, auto/truck-oriented offline routing engine, and what Google generates on the fly in real time.

In urban situations it can be amazing what a huge difference real time traffic data has on the routing solution delivered by Google. On any given day around Paris, for example, you're nuts to use an offline solution, which can add hours to a routine trip to the airport compared to using Google. That's true for any bigger city.

In other cases, the road network used by MapFactor just is not complete. As anyone who has routed on the desktop knows, you are at the mercy of your data quality. If your road network has errors, gaps, etc., in it, your routing solution will be wrong, compared to the "perfect" solution. The hard part of routing is getting good, updated road networks. It's not the routing algorithms.

So no, it's not stupid to use Google for routing. It can be very much smarter, overwhelmingly smarter, than doing offline routing. It all depends on what you are doing.

As for this being GIS and requiring graph functions, well, the case for that is not as strong as it used to be, given continuing advances in technology. It's true that given that Manifold already has a truckload of graph functions on the shelf those are the sorts of things that will get added automatically. It's not like anybody will ever say "oh, no need for those in 9." Manifold likes to do such things, if for no other reason to provide a marketing comparison against ESRI and others.

But the reality of those graph functions is that very few end users ever use them. Especially in modern times, people don't like to have to get road networks and then maintain road networks. They just want to push a button and get a route from Google. Are they bad people for wanting that? Nope. Nothing wrong with that. Manifold should make that an easy option for them.

It's like base maps: as web servers have become easier to use, more and more people just use a web server layer (like I did, using the Canvas dark ArcGIS REST server) instead of cobbling together their own base maps. Does Manifold make that easy? Sure. Does Manifold also provide facilities for people to create their own base maps if that's what they want, importing from various sources, styling the results and so on? Of course. The two complement each other.

It's the same thing with routing or graph functions over road networks. Online and offline routing options complement each other. But, like anything else, the order in which such things get added has to be based on the priority that people assign to them.

KlausDE

6,410 post(s)
#04-Dec-18 10:06

Google routes through a network of roads. Roads only. Since building the network and reporting current traffic conditions is most of the effort, it would be stupid not to use Google for this special service.

But a general GIS tool for routing would accept any sort of network: sewage pipes, rivers and reservoirs for water, Internet cables and nodes for messages, branched bird routes from Africa to Europe through the airspace they share with airplanes over Israel ...

Manifold, IMO, should be the GIS tool for questions that have not yet been asked. OK, it is difficult to sharpen a feature request for future tasks, especially where big data may be involved.

But beyond what Mfd8 offers, I'd like to have one-way links and reservoirs/sinks with capacity. I would prefer other tools for designing a motherboard, but it would be nice to have the basic elements of a universal routing tool at hand.

But that's ahead of 9.0.168.4 and that's OK. And it's of minor priority, as few users are confronted with this sort of question. A hint on the development road map would be nice.


Do you really want to ruin the economy only to save the planet?

tjhb
10,094 post(s)
#04-Dec-18 07:29

There are other examples.

Should we hold our breath for CUDA raster functions, for perhaps another year, or should someone dare to invest a few weeks or a month but risk being trumped by internals?

Hydrology?

No plan at all. No marketing whatsoever.

Mike Pelletier

2,122 post(s)
#04-Dec-18 15:17

The raster functions have been hanging out there for a long time. I've shifted gears and now hope that the development focus is making core mapping functions exceed Mfd8. Core meaning cartography, labeling, vector editing, printing, and exporting. Admittedly my job lately has allowed me to delay heavy-duty GIS analysis work.

The software relies on many keystrokes, so it is not easy to use unless you work with it day to day. There isn't much motivation to use it for mapping, though, if it isn't better than what I already have. Of course the competition doesn't seem to be that much better, or at least doesn't have the potential Mfd9 has.

There is a long list of GIS functions we all want, but having the mapping side better than Mfd8 by the end of the year will go a long way, I suspect, for many people.

Dimitri


7,413 post(s)
#04-Dec-18 18:28

and now hope that the development focus is making core mapping functions exceed Mfd8. Core meaning cartography, labeling, vector editing, printing, and exporting.

Yes, that's exactly the focus. There's been a massive amount of work on core cartography, as in graphics for areas, lines, points, labels. It now greatly exceeds 8, as you'll see in the next public cutting edge build.

It adds up quickly, with hundreds of new parameters now being managed by the new system. It's not just way more options for styles; it is also far better accuracy and rendition than 8, in terms of things like 10 points really being 10 points in a much more consistent fashion regardless of screen DPI or printer.

Legends are a part of that and come next, with significant legend functionality before the end of the year.

After that come rasters, servers and geometry. Rasters are obvious and, in a way, easy. Knock down a few dozen of the top desires for rasters and you get a lot of happiness. By "servers" I mean things like the web server, so you can publish to the web effortlessly using Manifold. By "geometry" I mean filling in a long list of needed utility functions and GIS capabilities, including those vector editing capabilities not yet done by then as part of the usual mix of bigger and smaller things in builds.

All of the above is not a particularly long program, certainly nowhere near a whole year. After that there is a long laundry list: yet more cartography, yet more in many other areas, such as analytics, routing, development, etc.

Keep in mind that along the way there will be "small things" in every cutting edge build, which really add up in terms of overall greater convenience and smoothness, or which fill in small parts for things like managing printers in layouts, etc. A good example is the "eyedropper" color picker, a small detail, true, but it has totally revolutionized for me the ability to copy cool cartography I've seen on the web, in web servers like Mapbox displays, and so on. I just did a copy of a famous Mapbox example that uses exactly the same color ramps, courtesy of the eyedropper picker. :-)

tjhb
10,094 post(s)
#05-Dec-18 05:38

That is actually a really good answer.

It provides a useful updated roadmap.

Thanks Dimitri. Very helpful.

Mike Pelletier

2,122 post(s)
#05-Dec-18 15:43

Like it!

Do hope that exporting layout to PDF and/or improved Page Setup for printing finds its way toward the top of the list, so we can take advantage of all the mapping goodies

Dimitri


7,413 post(s)
#05-Dec-18 18:34

exporting layout to PDF

? You already have that, don't you? Just print to PDF using any one of many PDF printers.

Mike Pelletier

2,122 post(s)
#05-Dec-18 18:51

Yes, but it isn't working for printing a layout on large paper (36"x48") that contains Google images. Please try the layout in the .map located in this thread. I get a "no memory" error. I'm using CutePDF Writer. When CutePDF Writer was set to 8.5x11 I did get a 2 MB PDF file that covered just the upper right corner of the layout. Also, I get a "no memory" error when trying to print to our plotter at the large paper size. In order to get the Page Setup dialog to keep 36"x48", I had to set our plotter as my default printer and set its default paper size to 36"x48". Thanks.

KlausDE

6,410 post(s)
#05-Dec-18 21:07

We need to be able to set the raster resolution for the printer. The Mfd8 export allows that. The hairline problem for vectors is solved in Mfd9, but we need to control the resolution for images, especially images from servers. Some of them use different image sources for different zoom ranges and are optimized for the monitor. We need control.

There is not much use for resolutions higher than 600 DPI; 300 DPI is more than you need in most cases.


Do you really want to ruin the economy only to save the planet?

adamw


10,447 post(s)
#11-Dec-18 13:19

This is not in 168.6, but we are working to fix both of the following issues:

1. Images from web data sources like Google print at far too detailed a level to be readable (letters and shapes end up too small).

2. Images in general print at maximum resolution available for the print job.

The second item is not technically a bug, but it is still an important usability issue. When you print a page at 600 DPI, you frequently don't want rasters to use the full 600 DPI: that increases the size of the print job dramatically, making printing slow, and worse, at some point the print driver will simply reject the job, frequently after spending a lot of time trying to process it. Like Klaus says, we need a way to specify raster resolution separately. We will do that.
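To put rough numbers on that, here is a back-of-the-envelope sketch (illustrative arithmetic only, nothing from the Manifold code) for an uncompressed RGB raster filling a 36" x 48" page:

def raster_size(width_in, height_in, dpi, bytes_per_pixel=3):
    # Uncompressed RGB raster covering the full page at the given resolution.
    px_w = int(width_in * dpi)
    px_h = int(height_in * dpi)
    return px_w, px_h, px_w * px_h * bytes_per_pixel

for dpi in (150, 300, 600):
    w, h, size = raster_size(36, 48, dpi)
    print(f"{dpi} DPI: {w} x {h} px, ~{size / 2**30:.1f} GB uncompressed")

# 150 DPI:  5400 x  7200 px, ~0.1 GB
# 300 DPI: 10800 x 14400 px, ~0.4 GB
# 600 DPI: 21600 x 28800 px, ~1.7 GB

Going from 150 DPI to 600 DPI multiplies the raster data by sixteen, which is exactly the kind of growth that makes print jobs slow and drivers choke.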

Dimitri


7,413 post(s)
#06-Dec-18 06:05

Yes, but it isn't working for printing a layout on large paper (36"x48") that contains Google images.

Ah, that's a very different thing than this...

exporting layout to PDF

9 exports layouts to PDF, and usually does so very well, much better than 8. If it does not do what it is documented to do, that's time for a bug report.

Keeping in mind the discussion in the File - Print topic, it can be a challenge sometimes to decide if a particular issue is one of those problems arising from PDF packages or printer drivers that the topic explicitly warns about, or if it is a problem arising from a bug in Manifold. Almost always it is an issue in the printer driver, Windows or the PDF "printer" package, but that's OK. The issue still should be reported as a possible bug.

If there is a new capability you'd like, that's a Suggestion, not a bug report. There, too, if what you want is a repair to a bug in the printer software you are using, as noted in discussion such as...

Caution: Some printers and print-to-PDF packages will ignore Windows Control Panel settings for that printer's paper size, and also will ignore paper size as specified in Page Setup. Instead, they will insist on using some default paper size, such as Letter, even if we want to print A4, or vice versa.

...that might not be in Manifold's power to grant. If Manifold didn't write the printer driver, there are limits to Manifold's ability to fix bugs in other people's software. Sometimes there are acrobatic moves, jumping backwards through hoops, that Manifold might be able to perform to avoid triggering bugs in other people's software. But undertaking such acrobatics when the bugs are not Microsoft bugs is usually a lower priority.

In any event, please phrase such issues as what they are: a possible bug in an existing capability, or a new refinement you'd like added to an existing capability. Phrasing it as "I need this, which doesn't exist" as a way of giving a desire greater heft just adds noise and doesn't help.

---

One more thing:

In the case of PDF the integration issue is challenging: PDF is an extremely messy thing, as can easily be seen by how different print-to-PDF packages create radically different PDFs, and by how the very same PDF can be interpreted in radically different ways by different PDF display packages. If PDF as a standard were not so Rube Goldberg, and if writing/reading packages were not also so unreliable, you wouldn't see such extreme nuttiness in the variety of results and interpretations of those results.

Add to that the extreme mess of how printer drivers work in Windows and it can be very difficult to sort out the cause of any particular anomaly. It is often the case that there are multiple causes, such as a variety of bugs that are simultaneously active in Manifold, in the print-to-PDF package, in Windows and in the PDF display package. It's also true that most PDF writing packages have little or no experience handling big data. There are plenty that are 32-bit only and have clearly never been fed bitmaps that are gigabytes in size. Try them with Manifold and you find all the bugs in the package that printing from Notepad to PDF does not trigger.

It is also true in something as loose and as messy as PDF that anomalies might not even be something you could call a bug, but instead are an example of a gray area in PDF which is open to interpretation, with different packages interpreting what should be done in different ways. When all the stars align you get a viable PDF. When there are legitimate differences of opinion that do not align, you don't get a viable PDF.

Note that none of the above is in any way dodging the need for Manifold to be able to print to PDF and to print to a reasonable variety of printers and plotters. It's just pointing out the need to keep the eye on a pragmatic approach to resolving anomalies.

Anyway... step 1 with any problem when something in Manifold does not do what the documentation says it should is to file a bug report.

Dimitri


7,413 post(s)
#06-Dec-18 09:00

Should add, "export" to PDF in 9 is "printing" to PDF, with both the "exporting" and "printing" words being used in a somewhat misleading way, in that they are scrunching the square peg of what actually happens into the round hole of conventional terms that people want to use by analogy to what printers do or what exporting to data storage file formats does.

A PDF file is not really either a printed page or an exported encapsulation of pixel or other data. It is a program in the PDF programming language. When correctly executed by software capable of running programs in the PDF programming language, the visual result should be what is desired. So really what is going on is neither a "print to PDF" nor an "export" in the usually understood sense of those words.

It is automatically writing a program that creates a desired visual effect.
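To make that less abstract, here is a minimal hand-rolled sketch in Python (purely illustrative, nothing to do with how Manifold or any print driver actually emits PDF) that writes a one-page PDF whose content stream is a tiny program of drawing operators: path operators draw a line, text operators place a string.

content = b"""0 0 1 RG 2 w
72 72 m 300 400 l S
BT /F1 18 Tf 72 500 Td (Drawn by PDF operators) Tj ET
"""

# The four PDF objects: catalog, page tree, page, content stream.
objects = [
    b"<< /Type /Catalog /Pages 2 0 R >>",
    b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
    b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
    b"/Resources << /Font << /F1 << /Type /Font /Subtype /Type1 /BaseFont /Helvetica >> >> >> "
    b"/Contents 4 0 R >>",
    b"<< /Length %d >>\nstream\n" % len(content) + content + b"\nendstream",
]

out = bytearray(b"%PDF-1.4\n")
offsets = []
for i, body in enumerate(objects, start=1):
    offsets.append(len(out))            # byte offset recorded for the xref table
    out += b"%d 0 obj\n" % i + body + b"\nendobj\n"

xref_pos = len(out)
out += b"xref\n0 %d\n" % (len(objects) + 1)
out += b"0000000000 65535 f \n"         # each xref entry is exactly 20 bytes
for off in offsets:
    out += b"%010d 00000 n \n" % off
out += b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % (
    len(objects) + 1, xref_pos)

with open("hello.pdf", "wb") as f:
    f.write(out)

A PDF viewer "runs" that little program to produce the page, which is part of why the same file can look different in different viewers: they are different interpreters of the same language.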

KlausDE

6,410 post(s)
#06-Dec-18 11:38

But that implies that we will not be able to set printer properties from Mfd9 scripts, doesn't it?

I regularly have to set image resolution differently from the Manifold 8 default, and for large formats differently from the PDF driver's defaults, in the cases where drivers allow controlling this parameter at all. And preferably independently of vector resolution. There are no such options in the Microsoft PDF printer, for example.


Do you really want to ruin the economy only to save the planet?

Dimitri


7,413 post(s)
#06-Dec-18 14:11

But that implies that we will not be able to set printer properties from Mfd9 scripts, doesn't it?

No, it's just a discussion of PDF.

Mike Pelletier

2,122 post(s)
#07-Dec-18 18:01

Okay, more to report. I think the issue is purely a size problem. CutePDF Writer (free) does not allow setting image resolution. However, I can set it for a PostScript file for Canvas (graphics software). When set to 300 dpi and 36"x48" paper it works, and exporting to PDF from Canvas results in a 500 MB file.

However, the Google image is poor quality. Setting it to 600 dpi gives the "no memory" error, which is not surprising because it would likely result in a file of several GB. PDF files do not seem to work well once they get over a couple hundred MB.

I don't think PDF is the answer for these big files. Shouldn't there be another export format for the layout? Perhaps BigTIFF? I don't know what would work for sharing with others and/or as an alternative way of printing.

Ultimately, the goal in my case is printing. I don't know if the problem is my printer, but printing 36"x48" with a 1' resolution .ecw has always worked in the past in Mfd8. I tried different spooling options to my plotter, updated the printer driver, updated Windows, all with no luck. Also, I tried the Foxit PDF software that my IT department has and it failed at 300 dpi.

tjhb
10,094 post(s)
#07-Dec-18 18:22

With the Adobe PDF printer driver, all options are available including vector and image resolution and image compression. The compression options include none, JPEG, ZIP, and JPEG2000.

Might be worth switching to Adobe?

Mike Pelletier

2,122 post(s)
#07-Dec-18 18:53

Thanks for the suggestion Tim. Before I purchase that, and if you have some spare time, could you please check and see if it allows you to export a large map (about 900mm x 1200mm) with a Google image. You could use this .map project for the best comparison. For me the "no memory" error pops up immediately, both when exporting and when printing.

tjhb
10,094 post(s)
#07-Dec-18 21:23

I've tried with your project, and have so far had no luck.

I first panned all over the layout at close zoom to cache native resolution images, then...

I tried printing using the High Quality Print settings, modified to 600 dpi, and to downsample images to 600 dpi with JPEG compression. The result was the message "Tile size is too large."

Then I tried the same at 300 dpi (in both places), and the Acrobat Distiller service (which drives PDF print properties) locked up fairly solid.

But: I am using Acrobat X, which is a very outdated version (the last before the subscription model was imposed). It is 32-bit. You may have better luck using a trial of the current version of Acrobat DC. Or not.

KlausDE

6,410 post(s)
#07-Dec-18 22:21

I'm not sure, but I think that "Tile size is too large." is a message from Manifold while rendering an image without layers at the resolution the PDF driver reports as its graphics resolution. We get the same message using different drivers. And all drivers create unlayered results, downsampling vector data to the graphics resolution.


Do you really want to ruin the economy only to save the planet?

Mike Pelletier

2,122 post(s)
#07-Dec-18 23:49

Thanks! I reported this issue a week ago to tech along with that .map project. Hopefully it captures their attention. My comments to tech were probably not as helpful as your post and Klaus's above.

adamw


10,447 post(s)
#11-Dec-18 13:33

Thanks a lot for the report.

"Tile size is too large" is, indeed, an error thrown by our code. We are going to address the issue in two steps:

First, we will allow specifying maximum raster resolution for a layout. By default, we are likely going to set this to 150 DPI. This should help keep the amounts of processed data manageable right away.

Second, for big print jobs which do want big rasters (because the paper size is big) and/or high resolution rasters, we are going to split rendered rasters into parts. It will be up to the driver to store the huge amounts of data the user wants to have printed, and the print job can still fail there, but 9 will do its part and provide the necessary data as requested.
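Conceptually, that second step amounts to something like the following sketch (purely illustrative, not the actual implementation): render and hand the raster to the driver in horizontal bands instead of one giant bitmap.

def band_ranges(height_px, band_px):
    # Yield (top, bottom) row ranges covering the raster in horizontal bands.
    top = 0
    while top < height_px:
        bottom = min(top + band_px, height_px)
        yield top, bottom
        top = bottom

# A 36" x 48" page at 300 DPI is 10800 x 14400 px; send it in 1024-row bands.
for top, bottom in band_ranges(14400, 1024):
    pass  # render rows [top, bottom) and pass just that band to the driver

print(len(list(band_ranges(14400, 1024))))  # 15 bands

Each band stays small enough to render and spool on its own, while the driver remains responsible for storing (or rejecting) the total amount of data.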

tonyw
736 post(s)
#07-Dec-18 18:57

Shouldn't there be another export format for the layout? Perhaps BigTIFF? I don't know what would work for sharing with others and/or as an alternative way of printing.

My need is for both printing and sending final products in some generic, common format to clients. PDF format works well as my clients can easily open the file on whatever device they have handy.

For printing I use the local office big-box retailer, either self-serve or on-line upload of larger outputs for them to print. Once I make sure the paper size is correct and matches paper sizes of their copier-printers, outputs in PDF format work for me. Also I can send the PDF to clients for them to print hard copy. Despite the limitations of PDF, the generic format works for the objective of getting products in the client's hands.

CutePDF has not worked for me with M9; it stops and won't finish generating output. Fortunately my output size is small, US Letter size, so I've been taking screenshots of the layouts, pasting them into MS Word and creating PDFs from MS Word. Not great, but the resolution is sufficient even at ledger 11"x17" size.

Re: Adobe Acrobat, I'd hate to have to buy Adobe Acrobat just to have the PDF printer driver. I use NitroPDF for PDF document assembly and editing. NitroPDF has PDF "creators" but they also stall if I try to generate PDF files from within M9.

dchall8
1,008 post(s)
#10-Dec-18 14:26

Does NitroPDF create layered PDFs? And can you edit the layers (turn them on and off to make a new default)?

adamw


10,447 post(s)
#11-Dec-18 13:37

In order to create sensible layers, we have to tell the printer when a particular layer starts and ends, and there's no standard way to do that. Unfortunately.

dchall8
1,008 post(s)
#12-Dec-18 14:20

Did the Export command in M8 work differently than in M9? Layered PDFs were my bread and butter for a while.

adamw


10,447 post(s)
#12-Dec-18 15:16

8 had an option to export PDF without going through the driver. We might add such an option to 9 in the future.

adamw


10,447 post(s)
#11-Dec-18 13:37

CutePDF has not worked for me with M9; it stops and won't finish generating output.

Does it fail only when printing rasters, or for vectors as well?

dchall8
1,008 post(s)
#06-Dec-18 17:58

It is automatically writing a program that creates a desired visual effect.

Very interesting discussion. That certainly helps me to understand the complexity of the issue better.
