If you're working on Caché 2016.2 or higher (or any version of IRIS), you have access to %DynamicAbstractObject (DAO) and its subclasses %DynamicObject and %DynamicArray.  While most people know these classes only as our "in-memory model for JSON", they are actually much more powerful than that.  DAO was originally designed to be the basic building block for all forms of abstract data modeling; a staging area, if you will, for sanitizing, merging, annotating, redacting and transcoding arbitrary data.

The original vision was that data could be procured from a variety of sources (URLs, result sets, classes, globals, external files, etc.) in a variety of formats (JSON, XML, CSV, YAML, HTML, etc.) to populate an Abstract Entity Tree (AET), a directed, but schema-less, graph that focuses on the semantic meaning and core relationships of the data, rather than syntactic artifacts introduced by its serialization.  Under the AET model, data is organized into nested trees of _objects_ (unordered collections of named items) and _arrays_ (ordered collections of unnamed items).  Individual nodes may be added, edited or removed using the DAO API.  Finally, the finished AET (or subtrees thereof) may be published (serialized in a variety of formats) to a stream or persisted to class instances (late-binding schema) or directly to globals (no formal schema required).

DAO currently has kernel-level support for serializing to and from JSON.  It also provides an API for AET node manipulation and ObjectScript enhancements for initializing nodes directly from JSON-like expressions.
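For instance, the constructor syntax and a few of the node-manipulation methods look like this (the field names here are purely illustrative):

  Set patient = { "name": "Jo", "active": true, "visits": [ 2019, 2021 ] }
  Do patient.%Set("mrn", 12345)      // add a named node
  Do patient.visits.%Push(2023)      // append to an ordered array
  Do patient.%Remove("active")       // delete a node
  Write patient.%Get("name"), !      // read a single value
  Write patient.%ToJSON()            // serialize the whole tree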

Using DAO, you could build up a single abstract data "document", complete with both meta-data and data, to populate a variety of classes and/or globals, serialize it to JSON, and then import and unpack the payload on the target system, either by importing a file or by building a small REST service.  (JSON travels very well over REST without the overhead of standing up a full SOAP/WSDL enterprise; this can be especially useful when working on a prototype, a one-off, or an "actual requirements still in flux" situation.)
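As a rough sketch of that round trip (the file name and payload structure are invented for illustration), the payload could be written to a file on the source system and rehydrated with %FromJSON() on the target; the same %FromJSON() call works just as well on a stream delivered to a REST handler:

  // On the source system: serialize the finished AET to a file
  Set doc = { "meta": { "source": "siteA", "version": 1 },
              "data": [ { "id": 1, "value": "alpha" }, { "id": 2, "value": "beta" } ] }
  Set out = ##class(%Stream.FileCharacter).%New()
  Set out.Filename = "/tmp/payload.json"
  Do out.Write(doc.%ToJSON())
  Do out.%Save()

  // On the target system: rehydrate the AET and unpack it
  Set in = ##class(%Stream.FileCharacter).%New()
  Set in.Filename = "/tmp/payload.json"
  Set copy = ##class(%DynamicAbstractObject).%FromJSON(in)
  Write copy.meta.source, ": ", copy.data.%Size(), " records", !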

Out of the box, unfortunately, we do not currently ship many of the other nice-to-have supporting routines (ResultSetToAET(), XMLToAET(), AETToGlobal(), AETToRegisteredObject(), ValidateToSchema(), etc.), but many developers have made such contributions on an ad hoc basis (and some are posted elsewhere on this site).

Also, if you have specific use cases in mind, feel free to contact me; I've prototyped quite a number of DAO type-conversion and serialization methods over the years and may have something that can jumpstart your efforts.

[Note: DAO, with its support for %ToJSON() and %FromJSON(), is a different beast from %JSON.Adaptor.  The latter works like our XML adaptor and generates methods at class compilation time that write JSON syntax snippets based on the static schema of a %Persistent class.  DAO processes OREFs dynamically at run time and has no requirement that classes be "JSON enabled" at compile time, and, as mentioned above, with the right glue code, DAO can also be used to serialize/initialize raw globals, result sets, %Zen.ProxyObjects, streams, %RegisteredObjects, $Lists and ObjectScript multi-dimensionals.]

Hope this helps.

Zen doesn't have a rich text editor component of its own, and ISC's development of new Zen functionality halted several years ago, so nothing along those lines is in the pipeline.

However, I know several Zen adopters who were successful at creating custom Zen wrappers around both the TinyMCE and CKEditor widgets.  I'm not endorsing either of these third party tools, I'm just acknowledging that they have been made to work with Zen in the past.  I also vaguely recall one user having luck running the Dojo Toolkit dijit/Editor in an IFrame within a Zen page at one point, successfully passing data back and forth through frame references and ZenMethod calls.  I'm sure there must be others by now as well.  It's not an ideal solution, but certainly something worth investigating before resorting to writing a rich text editor from scratch.

In general, third party widgets that stick to standard HTML5, CSS and JavaScript can usually be "wrapped" as Zen components so long as they are not tightly coupled to some sort of page controller or custom page loader.  Widgets based on "highly opinionated" frameworks (such as Angular) or written in non-standard languages that require extra transcoding steps (e.g. TypeScript) are much harder to integrate and probably more trouble than they are worth as far as Zen extensions are concerned.

Best of luck.

While the <tablePane> component doesn't explicitly support the onMouseOver event, there are a few hooks you can exploit to inject whatever HTML5 content you desire.

One generic trick is to use the getEnclosingDiv() method (defined for all Zen components that map to HTML DOM elements).  This will get you a pointer into the DOM where you can use JavaScript to add event handlers, append or delete child nodes, and annotate DOM nodes with additional (temporary and/or hidden) information for bookkeeping purposes.  Just be aware that a server-side refresh will wipe out any such changes at run time, so you need to either pay attention to when those are occurring, or be ready to edit the DOM again (with an onUpdateHandler() or onRefreshHandler()) should your changes get toasted.
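As a minimal, hypothetical sketch (the component id 'myTable' and the method name are invented), a ClientMethod on the page could grab the enclosing DIV and attach a listener:

ClientMethod attachHoverHandler() [ Language = javascript ]
{
    // zen('myTable') finds the component by id; getEnclosingDiv()
    // returns the HTML element that encloses its rendered output.
    var div = zen('myTable').getEnclosingDiv();
    div.addEventListener('mouseover', function(evt) {
        // inspect evt.target, annotate the DOM, show a tooltip, etc.
    });
}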

In your case, however, tablePane offers a more directed solution.  OnDrawCell() is a server-side callback that allows you to inject HTML into the page as the component is being generated on the server.  You can define a separate callback for each column and the system will invoke that callback once for every row in your result set, allowing every cell to have unique contents and behavior as desired.  (The "Using Zen Components" DocBook section on Zen Tables offers examples of how to use this callback and how to access the underlying query results from the handler.)

It sounds like what you want to do is to inject a <DIV> element with custom onmouseover and (maybe) onmouseout event listeners installed.  Give this DIV a width and height of 100% (so it fills the whole cell) and then append whatever visible content you want inside the cell.  This can all be achieved with simple WRITE statements or by using the &HTML<> syntax.

The callback to actually process the event can be any visible ClientMethod, usually accessed via the special JavaScript objects zenThis or zenPage, and can be passed whatever information is needed to determine which image should be displayed and where on the page it should appear.  A rough sketch follows.
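This is only a hedged outline (the column name "ID", the callback names and the OnDrawCell signature shown are assumptions to be checked against the Zen Tables documentation for your version); the ClientMethod bodies are left as stubs:

ClassMethod DrawHoverCell(pTable As %ZEN.Component.tablePane, pName As %String, pSeed As %String) As %Status
{
    // Inside OnDrawCell, %query(<column name>) exposes the current row's values.
    Set id = $Get(%query("ID"))
    Set val = $ZConvert($Get(%query(pName)), "O", "HTML")
    // A DIV that fills the cell and reports hover events back to the page object.
    Write "<div style=""width:100%;height:100%;"""
    Write " onmouseover=""zenPage.showPreview('"_id_"',event);"""
    Write " onmouseout=""zenPage.hidePreview();"">"_val_"</div>"
    Quit $$$OK
}

ClientMethod showPreview(id, evt) [ Language = javascript ]
{
    // Decide which image belongs to this row and position it near the pointer.
}

ClientMethod hidePreview() [ Language = javascript ]
{
    // Hide or remove the preview element.
}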

Hope this helped.

> Clearly it's a popular and valid use case, and I want to know more about it.

While it's been proven (mathematically) that JSON and XML are equivalent serializations (anything that can be expressed in one format can be expressed without data loss in the other), historically and philosophically they are different beasts.  Like its parent SGML, XML was originally envisioned as a generic, robust, declarative 'save file' specification between authoring software (producers) and 'smart' renderers (consumers).  Over time, as needs changed, it evolved to service other application areas; many mature, complex processing tools have arisen to enable XML to address these areas, especially in the transformation and database arenas.  JSON started as a net-friendly, platform-independent, datagram serialization scheme.  Its roles have also diversified, and its clean, flexible simplicity has led to a position of dominance across the internet and in far-reaching applications, ranging from document databases and web security to future-proofed remote procedure calls and robotics control.
 
From a data collection, persistence and projection point of view, XML came from a time when the prevailing opinion was that, due to limited resources and platform impedance mismatches, one should make the data match the schema and throw away anything that doesn't meet expectations or anticipated need.  In contrast, JSON hails from the Wild West of the Web, where a typical cell phone has more computing power than the space shuttle did, data is everywhere and growing exponentially, and, perhaps most importantly, flexibility in the face of the unexpected is often a job requirement.

Veteran XML developers, especially in the DB world, often mistake JSON's lack of a formal, predefined schema for a lack of structure, consistency or means of validation; this is not a fair assumption.  JSON can be quite organized and well structured, and conventions (though not yet standards) for "schema" validation exist.  The key difference is that serializing _all_ the data (even if subsets of that data have radically different internal structures) is job one, and pruning, massaging and reformatting areas of that data pool to align with a particular schema is an optional step, late in the process.  When validating, XML often takes an all-or-nothing approach in the name of total data integrity; applications relying on JSON frequently embrace more of an "Is it good enough for me to do my job?" approach, and "my job" is often just one small lambda production in a larger pipeline that may never need to look at the data in its entirety.

The practical upshot of these two mindsets has direct bearing on why JSON took the web by storm (as well as contributing to the overall acceptance of JSON as the dominant serialization standard in use today where XML once reigned supreme).  When dealing with cross-platform communications, an XML schema is basically the lowest common denominator that all parties can agree upon; the application can't move forward unless all parties agree to move in lock-step.  JSON's approach of a late-binding (if any) schema and broad latitude with respect to what it means to be 'compliant' allows for natural evolution of the data streams; consumers of the JSON may elect to simply ignore unexpected key-value pairs and only raise exceptions if the provided data does not allow for completion of the job at hand.

Between the number of moving parts and the rate of change on the internet as a whole, XML proved to be simply too strict and too cumbersome to compete for anything other than its original purpose, a declarative mark-up language (HTML5) intended for a smart renderer.  Over time, predictions that JSON's fast and loose approach to data checking would result in unmaintainable chaos have proven (in general, there are always exceptions) to be incorrect, and multiple software engineering studies have since found that application feature points built around JSON are both faster to develop initially and cheaper to maintain over the lifecycle of the application than equivalent feature points revolving around XML.  This has been slowly eroding legacy web services like SOAP and SAML in favor of REST and OIDC, and efforts to interact with (or convert for retirement) older XML-based systems have created high demand for XML to JSON conversion utilities.

Due to this demand, there exist a number of third party utilities that will do a blind conversion from XML to JSON.  In many cases this can suffice.  The downside to this approach is that you don't really control the process: it's serial in / serial out, with no exposed model in the middle for editing the pipeline.  If the default output of the third party tool doesn't cut it, you are forced to re-import the JSON into something you can manipulate, make your changes, and then export it again to produce the final product.

With respect to InterSystems' own technologies, we provide a number of options to address this issue (but not an out-of-the-box turnkey solution).  

One approach is to make use of the XML and JSON Adaptors.  You could: 1) Define (likely a series of) classes that align with the structure of your XML based on an XSD - all classes would need to be both XML and JSON enabled; 2) Import the XML document into your class structure using the XML adaptor; and, 3) Export the class tree using the JSON adaptor.  This should work and is very much in keeping with XML's schema-driven mindset.  It also allows for data validation, persistence, etc., but from a level-of-effort standpoint, you'll be paying for the ability to validate, the ability to persist, etc. whether you want/need those features or not.
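A hedged sketch of that first approach (the class name, property names and the <record> element correlation are invented, and %JSON.Adaptor presumes an IRIS-level product):

Class Demo.Record Extends (%Persistent, %XML.Adaptor, %JSON.Adaptor)
{

Property Name As %String;

Property Vital As %Boolean;

/// Import <record> elements from an XML file and re-emit each one as JSON.
ClassMethod ConvertFile(pFile As %String) As %Status
{
    Set reader = ##class(%XML.Reader).%New()
    Set sc = reader.OpenFile(pFile)
    If $$$ISERR(sc) Quit sc
    Do reader.Correlate("record", "Demo.Record")
    While reader.Next(.obj, .sc) {
        If $$$ISERR(sc) Quit
        Set sc = obj.%JSONExportToString(.json)
        If $$$ISERR(sc) Quit
        Write json, !
    }
    Quit sc
}

}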

The other approach is more heuristically based and 'JSON-esque' in its treatment of the data stream.  Using %DynamicArray and %DynamicObject it is possible to build (blindly) an abstract entity tree in memory that represents the core elements of the XML document model and then simply call %ToJSON() on the root of the tree.  This involves parsing the XML (we have utilities to help with that) and then applying a few heuristics to drive a consistent conversion:
 1) A tag name/element becomes a key name in the parent of the current node
 2) An attribute becomes a key name of the current node
 3) A serially repeated element maps to a %DynamicArray to preserve lexical order 
 4) A non-repeating element with either attributes or children maps to a %DynamicObject
 5) Attribute and element values of "true" and "false" map to JSON tokens true and false
 6) Attribute and element values that parse as numbers map to JSON numerics
 7) Non-boolean, non-numeric string values map to JSON strings
 8) Simple, singular values embedded within elements map to scalars
 9) Elements with a mix of text nodes and elements embedded in their children get special treatment
     a) A new key is added with a conflict free name such as "0$Kids"
     b) This new key is a sequence of nodes (%DynamicArray)
     c) Array entries are either JSON strings (mapping from Text nodes) 
        or child objects (mapping from nested elements)

For example:
<record id="2" name="example" vital="false">
  <unit>
     <page>0</page>
     <text>Hello</text>
  </unit>
  <unit>
     <page>1</page>
     <text>World</text>
  </unit>
  <coda>
     End of <bold>sample</bold>!
  </coda>
</record>

Becomes:
{
 "record": {
    "id":2, "name":"example", "vital":false,
    "unit": [
      {
        "page":0, "text":"Hello"
      },
      {
        "page":1, "text":"World"
      }
    ],
    "coda": {
      "0$Kids": [
         "End of",
         {"bold":"sample"},
         "!"
      ]
    }
  }
}
           
By blindly and recursively applying these rules to an XML DOM, a simple class method can build up an abstract model of the data while dispensing with XML-specific syntax.  The model can then be serialized to JSON simply by calling %ToJSON() on the root.  If the blind conversion needs to be cleaned up, you can tweak the resulting tree using the API for %DynamicAbstractObject, adding, deleting and restructuring specific nodes as desired before making the final call to serialize.
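To make that concrete, here is a hedged sketch of such a class method built on %XML.TextReader.  It implements rules 1-4, 7 and 8 only; the boolean/numeric coercions (rules 5 and 6) and the mixed-content "0$Kids" handling (rule 9) are omitted, and the scratch key "0$Text" is just an invented convention for collecting element text:

ClassMethod XMLToAET(pXML As %String) As %DynamicObject
{
    Set sc = ##class(%XML.TextReader).ParseString(pXML, .reader)
    If $$$ISERR(sc) Quit ""
    Set root = {}, level = 0, stack(0) = root
    While reader.Read() {
        If reader.NodeType = "element" {
            Set node = {}, name = reader.Name
            // Rule 2: attributes become keys of the current node
            For i = 1:1:reader.AttributeCount {
                Do reader.MoveToAttributeIndex(i)
                Do node.%Set(reader.Name, reader.Value)
            }
            Do reader.MoveToElement()
            // Rules 1, 3, 4: attach under the tag name in the parent,
            // promoting serially repeated elements to a %DynamicArray
            Set parent = stack(level)
            If parent.%GetTypeOf(name) = "unassigned" {
                Do parent.%Set(name, node)
            } ElseIf parent.%GetTypeOf(name) = "array" {
                Set arr = parent.%Get(name)
                Do arr.%Push(node)
            } Else {
                Do parent.%Set(name, [ (parent.%Get(name)), (node) ])
            }
            Set level = level + 1, stack(level) = node, names(level) = name
        } ElseIf reader.NodeType = "chars" {
            // Rules 7, 8: collect simple text content under a scratch key
            If $ZStrip(reader.Value, "<>W") '= "" {
                Do stack(level).%Set("0$Text", reader.Value)
            }
        } ElseIf reader.NodeType = "endelement" {
            // Collapse <tag>text</tag> (no attributes, no children) to a scalar
            Set node = stack(level), parent = stack(level - 1)
            If (node.%Size() = 1) && node.%IsDefined("0$Text") {
                Set scalar = node.%Get("0$Text")
                If parent.%GetTypeOf(names(level)) = "array" {
                    Set arr = parent.%Get(names(level))
                    Do arr.%Set(arr.%Size() - 1, scalar)
                } Else {
                    Do parent.%Set(names(level), scalar)
                }
            }
            Kill stack(level), names(level)
            Set level = level - 1
        }
    }
    Quit root
}

Running the sample XML above through a routine along these lines and calling %ToJSON() on the result would yield output in the spirit of the JSON shown earlier, minus the numeric and boolean coercions this sketch skips; empty elements and mixed content would need the additional handling described in rule 9.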

XML natively supports a more complex set of distinctions and constraints than basic JSON, and the conversion 'rules' above dispense with such XML-specific artifacts, so reversing the process is more difficult.  Were one to write a %ToXML() method for %DynamicObject, one could start with a similar set of heuristics, but to get acceptable output from an XML perspective there would have to be a schema (such as an XSD) supervising the process.  This is quite doable, but it is more complex and beyond the scope of what the original poster needs for his use case.