Aug 21, 2017 · 3m read

Handling images with Caché & JSON, and why 57 is a magic number

If you want to dynamically serve images as a property of JSON then there is no perfect encoding solution. One method used frequently is to Base64 encode the image. Whilst there are some negatives to doing this, such as data inflation, there are some positives to working with Base64 images inside the browser.

Let's say you have an image placeholder on a web page...

<div id="image-container"></div>

And you fetch a JSON message from the Caché server containing the image as one of its properties...

var msg = JSON.parse(client.responseText);

Without needing to decode the image data you can create an img element and append it directly to the place holder...

var img = document.createElement("img");
img.src = "data:image/jpeg;base64," + msg.Image;
document.getElementById("image-container").appendChild(img);

and the image will display.

Here's a jsfiddle example...

The parallel question to this is how to safely convert the image to Base64 in the first place. One option is to use the next release of the Cogs JSON library which will be out this week. The new release will add even more ways to work with JSON, as well as supporting stream properties, and auto-converting binary streams to and from Base64.

If doing this from scratch, then converting an image stream to Base64 seems simple enough, something like...

 do ..Image.Rewind()
 write $system.Encryption.Base64Encode(..Image.Read())

except that there are a few problems to consider.

The first thing is that Base64Encode expects a string. This is fine if you have long string support enabled and all of your images are smaller than about three quarters of the 3.6MB long string limit (Base64 inflates data by a third, so a roughly 2.7MB image encodes to the 3.6MB maximum).
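As a back-of-the-envelope check (a Python sketch; the limit of 3,641,144 characters is the commonly quoted Caché long string maximum, taken here as an assumption):

```python
# Assuming the long string limit commonly quoted for Caché:
# 3,641,144 characters (~3.6MB).
MAX_STRING = 3641144

# Base64 turns every 3 input bytes into 4 output characters,
# so the largest image that fits in a single encoded string
# is three quarters of the limit.
max_image_bytes = MAX_STRING * 3 // 4

print(max_image_bytes)  # 2730858, i.e. roughly 2.7MB
```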

To get around this limitation, the image has to be encoded in chunks.

Base64 works by taking 3 bytes and converting them into 4 bytes using an algorithm that keeps all 4 bytes in a safe ASCII range. Where the input length is not divisible by 3, the algorithm pads the final group and marks the missing bytes with = or ==, which is why you often see a Base64 string end in these characters.
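The 3-bytes-in, 4-characters-out arithmetic and the padding rule are easy to verify with, for example, Python's standard base64 module:

```python
import base64

# A 3-byte input fills 4 output characters exactly: no padding.
print(base64.b64encode(b"abc"))    # b'YWJj'

# One leftover byte beyond a full group is marked with '=='.
print(base64.b64encode(b"abcd"))   # b'YWJjZA=='

# Two leftover bytes are marked with a single '='.
print(base64.b64encode(b"abcde"))  # b'YWJjZGU='
```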

The takeaway is that encoding in chunks will produce odd results unless each chunk length is exactly divisible by 3; otherwise you end up with = and == padding embedded inside the string. So make sure the read length is a multiple of 3, such as 333.
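A quick sketch (in Python for brevity) shows why the chunk size matters: a multiple of 3 reproduces the whole-stream encoding exactly, while anything else scatters padding through the middle of the string.

```python
import base64

data = bytes(range(256)) * 4  # 1024 bytes of stand-in "image" data

def encode_chunked(data, chunk_size):
    # Encode the data in fixed-size chunks and join the results.
    return b"".join(base64.b64encode(data[i:i + chunk_size])
                    for i in range(0, len(data), chunk_size))

# A chunk size divisible by 3 (such as 333) reproduces the
# whole-stream encoding exactly...
assert encode_chunked(data, 333) == base64.b64encode(data)

# ...while one that isn't (such as 100) leaves '=' padding
# scattered through the middle of the string.
assert b"=" in encode_chunked(data, 100)[:-4]
```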

There is also another issue. By default, Caché's Base64Encode method inserts a line break into the encoded output every 76 characters. These line breaks make the JSON invalid, so you either have to remove them or escape them; neither option is pretty, and for me it always seems to break the Base64 at the other end. Even if the JSON survives, the line breaks will still break the JavaScript img value.

Fortunately, if you are on a newer version of Caché there is now an argument that disables the line breaks. Unfortunately, it's not available in older versions (such as the 2014 release I have tested this on).

There is, however, a simple workaround that is backwards compatible: use a read length of 57 (or a smaller multiple of 3). Since 57 bytes encode to exactly 76 characters, each chunk stays within the 76-character line length and no line break is ever added.
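The arithmetic behind 57 checks out: 57 = 3 × 19 input bytes become 4 × 19 = 76 output characters, exactly one MIME line. A Python sketch using the standard library's MIME-style encoder (which wraps at 76 characters, the same convention the Caché method follows):

```python
import base64

chunk = b"\x00" * 57  # one 57-byte read

# 57 input bytes (3 x 19) encode to exactly 76 characters (4 x 19),
# with no '=' padding since 57 is divisible by 3.
assert len(base64.b64encode(chunk)) == 76

# The MIME-style encoder wraps at 76 characters, so a 57-byte chunk
# comes back as a single 76-character line with no break in the middle
# (encodebytes always appends one trailing newline).
assert base64.encodebytes(chunk) == base64.b64encode(chunk) + b"\n"
```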

So a manual solution might look something like this...

 write """Image"":"""
 do ..Image.Rewind()
 while ..Image.AtEnd=0 { write $system.Encryption.Base64Encode(..Image.Read(57)) }
 write """"
Discussion (4)

In general, yes, it's much more performant to serve images separately, particularly for static content and high-volume image sites (Flickr etc.).

Browser support for data:image/jpg;base64 was not designed with JSON in mind; I think it was more to do with embedding images in an HTML page to reduce page load time (fewer requests), for instance where there are lots of small icon images.

Reducing requests might not be a strong reason to combine images, but the fact that this is an established practice does cancel out some of the arguments against one slightly large request versus two normal-sized requests.

So if these types of reasons are not important, the remaining arguments are less about why not, and more about why you would want to.

One good reason is if you want to move binary data and meta data in one single message, particularly if you require transactional integrity. This can be achieved by sending the meta data and binary data in separate multi parts over HTTP, or the binary data can be embedded inside the meta data. This latter practice has been around for some time where images and documents are embedded inside XML over SOAP.

The continued popularity of JSON for web APIs means that more and more developers are looking at the same problem of moving meta data and binary data around in a single message / transaction, not just to browsers but between servers.

If you look at the FHIR specification, for instance, you will see it has a base64Binary type.

As you say, it may be preferable to make a separate request, but perhaps only where performance is the strongest non-functional requirement. If it is not, then there are few counter-arguments to offset the benefits of a single message object.

I would like to mention that there are at least two big problems with serving images in JSON:

1. No client caching

2. No lazy-loading

These two things can be critical in SPAs with a large number of images (I suppose it's true for other static content too). No client caching also means more requests in the long term and higher load on the servers.

There are also other possible downsides.

Also, with HTTP/2 it will become unnecessary to bundle all the resources into one request to improve loading speed.

Hi Sergey,

> 1. No client caching

You will obviously know this already, but JSON can of course be cached (with its image) on the client if required.

However, images served in JSON are typically private images served via a secure API and should never be cached. As such, all caching benefits go out of the window anyway, whether you're using JSON or not.

> 2. No lazy-loading

Fetching images dynamically in JSON does not stop it from being a lazy-loading technique; if anything, it is perfectly suited to loading images just when they are needed.

> There are also other downsides possible.

There is really only one technical downside: images posted as Base64 are inflated by up to 37%. If you are paying for network bandwidth by the dollar then you might think twice, even if web server compression will bring this back down.
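The inflation is easy to measure (a Python sketch; the exact figure depends on line breaks, since raw Base64 adds about 33%, LF-wrapped output about 35%, and CRLF-wrapped MIME output approaches the 37% figure):

```python
import base64

data = bytes(range(256)) * 100  # 25,600 bytes of stand-in binary

plain = base64.b64encode(data)      # raw Base64, no line breaks
wrapped = base64.encodebytes(data)  # MIME-style, wrapped at 76 chars

print(round(len(plain) / len(data), 2))    # 1.33 -- raw overhead
print(round(len(wrapped) / len(data), 2))  # 1.35 with LF line breaks
```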

Of course, continue to serve static, unsecured images as binary over HTTP, particularly if you want to leverage automatic image caching.

If you are building a rich API (REST/RPC) with a JSON payload, then don't be afraid of embedding images. This will make for consistent APIs.

When we build SOAP interfaces, we don't expect a SOAP client to drop out of the SOAP protocol to raw HTTP to make a separate request for an image. How would we describe this via a WSDL to the SOAP client? How should the authentication for this be implemented? Some kind of token that needs its own separate emergency hatch in the server-side API?

The same will eventually be true for REST and JSON-RPC. It doesn't matter if this is server to server, or client to server; what we really want is a consistent API that can be described and authenticated in a consistent way (e.g. Swagger + OAuth 2.0).

Bottom line: to say there are "big problems" is a little FUD, and I hope it will not deter other developers from considering this approach before they have done their own research.