Model or Collection suitability for monitoring app

I have worked through the examples and have read all the documentation. In my application I am monitoring a machine which has, e.g., 60 copies of a particular function. This is all processed through a micro with a binary-coded message. I have the binary unpacked and have written a Go struct-based model served over NATS. Many of the data types are slices sized to the machine, e.g. 60, but it could be 30 for another machine. The operating status of the machine is to be observed in a browser or mobile client, so the slices will ultimately be updated across the network to the clients. I have no need to individually address any particular unit out of the 60 units in the slice, except on the client for display purposes. The updates will come in bulk from the micro, updating the entire slice by replacement and sending Resgate an update notice.

So in looking at Resgate and the go-res server application, I could probably work my problem by making the model look like a message with a JSON-encoded string as the payload. The client could then simply unpack and display it. Since Resgate won't need to update individual items inside the slice (and in one place the slice contains a 4-element model), would I be right in thinking this would solve my problem?

I have also considered that Resgate might be too big a hammer for what I am doing; however, I like the update function. There will not be very many clients connected, but what you have done also solves a lot of problems for me. Since this is a new application I have started in Go, I have a lot of latitude for selection.

A different person is working on the web/mobile client code and is using React. Integrating the RES JavaScript client with React seems easy.

I think NATS is the best message bus for what I am doing: fast, simple, and solid as a rock.

Thanks for any thoughts,
Ron

Welcome to the forum, Ron! :partying_face:

Let’s see if I’ve understood your question.
Assuming your Go struct looks like this:

type Foo struct {
	Bar []struct {
		ID   int    `json:"id"`
		Info string `json:"info"`
	} `json:"bar"`
}

Where the Bar slice has a length of 60.
Now, it is possible to do as you suggest, to encode the entire thing and put it in as a string value:

type Model struct {
	JSON string `json:"json"`
}

dta, _ := json.Marshal(f) // marshal the entire Foo struct
model := Model{JSON: string(dta)}

… and then, whenever there is a change to the Foo struct, you would encode/marshal the entire struct again and send the resulting string in a change event.

This is fully possible. Your JavaScript client would JSON.parse the string after getting the model, as well as on any change event providing a new JSON-encoded string.
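In go-res, this variant might look roughly like the following minimal sketch (the service name micro and the resource foo are just examples, and the update hook is simplified):

package main

import (
	"encoding/json"

	res "github.com/jirenius/go-res"
)

// Foo as above, with the Bar slice holding all the units.
type Foo struct {
	Bar []struct {
		ID   int    `json:"id"`
		Info string `json:"info"`
	} `json:"bar"`
}

// Model wraps the entire encoded state in a single string property.
type Model struct {
	JSON string `json:"json"`
}

var f Foo // current machine state, replaced in bulk by the micro

// onUpdate would be called on each bulk update from the micro: it
// re-marshals the whole struct and sends the new string in a change event.
func onUpdate(s *res.Service, newState Foo) {
	s.With("micro.foo", func(r res.Resource) {
		f = newState
		dta, _ := json.Marshal(f)
		r.ChangeEvent(map[string]interface{}{"json": string(dta)})
	})
}

func main() {
	s := res.NewService("micro")
	s.Handle("foo",
		res.Access(res.AccessGranted),
		res.GetModel(func(r res.ModelRequest) {
			dta, _ := json.Marshal(f)
			r.Model(Model{JSON: string(dta)})
		}),
	)
	s.ListenAndServe("nats://127.0.0.1:4222")
}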

There are some drawbacks to this though:

  • Having JSON-encoded strings within JSON is seldom encouraged
  • The entire struct/string is resent to Resgate and the clients on the smallest of changes
  • The only way of knowing which part of the data changed is to do a manual diff against the old version of the data

But in your case, these drawbacks might be acceptable.

Alternative

Instead, you could serve the data as separate resources/units linked together with resource references. This is closer to how Resgate is intended to be used (but not necessarily best in your case).
You could have the following resources:

micro.foo (model):

{
    "bar": {"rid":"micro.foo.bar"}
}

micro.foo.bar (collection):

[
    {"rid":"micro.foo.bar.1"},
    ...
    {"rid":"micro.foo.bar.60"}
]

micro.foo.bar.$id (model):

{
    "id": 1,
    "info": "Uno"
}

When the client requests micro.foo, Resgate will send all of the pieces in a single response back to the client. So, client-wise, it would not be more costly.
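Served with go-res, that layout could look something like this sketch (only an illustration, not a complete service):

package main

import (
	"strconv"

	res "github.com/jirenius/go-res"
)

// Unit is one of the machine's units, exposed as micro.foo.bar.$id.
type Unit struct {
	ID   int    `json:"id"`
	Info string `json:"info"`
}

const numUnits = 60

var units = make([]Unit, numUnits) // filled in from the micro

func main() {
	s := res.NewService("micro")

	// micro.foo: a model holding a reference to the collection.
	s.Handle("foo",
		res.Access(res.AccessGranted),
		res.GetModel(func(r res.ModelRequest) {
			r.Model(struct {
				Bar res.Ref `json:"bar"`
			}{Bar: res.Ref("micro.foo.bar")})
		}),
	)

	// micro.foo.bar: a collection of references to the unit models.
	s.Handle("foo.bar",
		res.Access(res.AccessGranted),
		res.GetCollection(func(r res.CollectionRequest) {
			refs := make([]res.Ref, numUnits)
			for i := range refs {
				refs[i] = res.Ref("micro.foo.bar." + strconv.Itoa(i+1))
			}
			r.Collection(refs)
		}),
	)

	// micro.foo.bar.$id: one model per unit.
	s.Handle("foo.bar.$id",
		res.Access(res.AccessGranted),
		res.GetModel(func(r res.ModelRequest) {
			id, err := strconv.Atoi(r.PathParam("id"))
			if err != nil || id < 1 || id > numUnits {
				r.NotFound()
				return
			}
			r.Model(units[id-1])
		}),
	)

	s.ListenAndServe("nats://127.0.0.1:4222")
}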

Pros:

  • No json encoded strings within json
  • The client will not have to do the JSON.parse step
  • Updating each unit separately would reduce traffic

Cons:

  • The first time, Resgate needs to fetch all 60 units in separate requests. Later it can use the cached version.
  • Might be more complex to serve in your use case

Also, if you needed to listen for changes on each unit, like you would when using reactive “frameworks” like modapp, then this alternative would be even more recommended. But since you are using React, I don’t think this will be an issue.

In any case, both solutions would guarantee that the client view is kept up to date in real-time! :slight_smile:

Best regards,
Samuel

Samuel, thank you very much for your detailed response and for verifying that what I am thinking of will work, albeit with the cons you mention.
My data looks mostly like this:
type X struct {
	Value1         int
	Status         []bool
	Reading        []int
	AnotherReading []float64
}

I did have a slice of structs, but found there was much less tag data in the JSON once I converted to a struct of slices. The string size for all the structs combined was about one third of what it was.
Right now I have 8 slices, and another unit type will likely add 8 more slices to the entire state of the machine. Much of the data is a time series of readings that all must be logged; I have been looking at InfluxDB for that aspect of the solution. The user interface is there to observe how the machine is operating, so many of the individual rows will change over time.
With that in mind, I think I will go with serializing the entire reading type and calling that, already JSON-encoded, the model item. I believe the user interface side can deal with that, and it will also reduce the event count. I could build the reference arrays programmatically using the more usual approach you give as option 2, so I will keep that option open and possibly benchmark both ways with typical data.
I already look for identical sequential readings when pulling data off the binary-encoded microprocessor embedded system that is at the far back end. I don't send when the new data is the same as the old data, except for the time series data, where the readings are accumulated for totals as well.
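In simplified form, that check is roughly the following (shouldSend is just an illustrative name; the time series exception is handled separately):

package main

import "fmt"

// shouldSend reports whether a freshly decoded slice differs from the
// previous one, so identical sequential readings can be skipped.
func shouldSend(prev, cur []int) bool {
	if len(prev) != len(cur) {
		return true
	}
	for i := range cur {
		if prev[i] != cur[i] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldSend([]int{1, 2, 3}, []int{1, 2, 3})) // false: skip the update
	fmt.Println(shouldSend([]int{1, 2, 3}, []int{1, 9, 3})) // true: send it
}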

Again, thanks very much for your help.
Ron

Hi again Ron!

JSON can produce quite a lot of excessive key strings for objects/models, that is true.
There is a task to add support in Resgate for per-message compression on the WebSocket connection, which would render that a non-issue, as gzip compresses repeated strings pretty well.

But that would only reduce Resgate-to-client message sizes. Service-to-Resgate messages would be unaffected.

Your structure, with slices of primitives (bool, int, float64), would be even easier to use with both of the alternatives:

  • The JSON encoding solution would not have to resend the entire struct, only the affected slice
  • The resource reference solution would get a limited number of separate resources: one model for struct X, and one collection for each slice (sketched below)
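Here is a rough sketch of that split (the resource names micro.x, micro.x.status, micro.x.readings and micro.x.anotherReadings are just assumptions for illustration):

package main

import res "github.com/jirenius/go-res"

// X mirrors your struct of slices.
type X struct {
	Value1         int
	Status         []bool
	Reading        []int
	AnotherReading []float64
}

var x X // replaced in bulk from the micro

func main() {
	s := res.NewService("micro")

	// micro.x: the plain values, plus references to the slice collections.
	s.Handle("x",
		res.Access(res.AccessGranted),
		res.GetModel(func(r res.ModelRequest) {
			r.Model(struct {
				Value1          int     `json:"value1"`
				Status          res.Ref `json:"status"`
				Readings        res.Ref `json:"readings"`
				AnotherReadings res.Ref `json:"anotherReadings"`
			}{x.Value1, "micro.x.status", "micro.x.readings", "micro.x.anotherReadings"})
		}),
	)

	// Each slice of primitives becomes its own collection resource.
	s.Handle("x.readings",
		res.Access(res.AccessGranted),
		res.GetCollection(func(r res.CollectionRequest) {
			r.Collection(x.Reading)
		}),
	)
	// ... micro.x.status and micro.x.anotherReadings follow the same pattern.

	s.ListenAndServe("nats://127.0.0.1:4222")
}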

Just to add one thing to that alternative. Let’s say your Reading slice has the resource name:

micro.x.readings

With the content:

[ 1, 2, 3, ... , 60]

When your micro service detects a change in the Reading slice, it could just send a system.reset for micro.x.readings. Resgate would take care of the rest by fetching that data again, generating the events required to mutate into the new state, and sending those events to the client.
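On the wire, that reset is just a publish on the system.reset subject, as defined by the RES protocol. A plain nats.go sketch:

package main

import "github.com/nats-io/nats.go"

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// Tell Resgate the resource may be out of date. Resgate re-fetches
	// micro.x.readings, diffs it against its cache, and sends only the
	// resulting events to subscribing clients.
	if err := nc.Publish("system.reset", []byte(`{"resources":["micro.x.readings"]}`)); err != nil {
		panic(err)
	}
	nc.Flush()
}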

It would potentially reduce traffic size to the client, as the client would not get the entire slice sent again, but only events describing the changes.

This holds true if only a few values change in the slice each time. However, if many or all values change each time, it would rather have a negative effect.
So, your JSON encoding solution might still be the most efficient.

Keep us posted on the solution you went with!

Best regards,
Samuel