New toolsets for enterprise-architecture

(There are several posts queuing up for publication here about this shift in direction towards ‘maker of tools for change’, but this one is a bit more urgent to support a key conversation that’s happening right now. The other posts will follow soon, I promise! 🙂 )

There’s general agreement, I think, that most of the toolsets we have available to us at present for doing enterprise-architecture and the like seem to range from “somewhat-useful but not as much help as we really need” all the way downward to “expensively worse-than-useless”. Even the best of them only give us good support for the very last stage of architecture development, the ‘final-diagrams’ and ‘models of record’ that are typically ‘tossed over the fence’ for others to implement. And most of the toolsets are really bad – in some cases to the extent of being an active hindrance – at the one thing we most need them to help us with, namely the exploratory ‘messiness’ that all but dominates the early to middle stages of any new development. So let’s do something about this: let’s define what we really need our toolsets to do.

Yeah, I know, I’ve written quite a lot on this over the years: the first reference I’ve found in a post here was from quite soon after I started this blog back in late 2006. Two recent posts that summarise some of the core ideas-so-far are:

It’d probably be worthwhile to (re)-read those two posts before continuing here.

By now, though, it really is time we got down to doing something practical about this – in other words, developing a real toolset that works the way we need. Hence what follows here summarises not only my current thinking about toolsets, but also where I’ve gotten to so far in terms of something that’s actually implementable, and for which even I could do something useful towards making it happen in real-world practice.

I’ve split the description below into various categories, with additional notes:

  • points headed ‘[Discussion]’ indicate items that I know will need further exploration
  • points headed ‘[Implementation]’ indicate items where I’ve already started on the detailed-design or even preliminary code for a proof-of-concept

Okay, here goes…

Fundamentals

The term ‘toolset’ here should be considered more as a type of platform, rather than a (probably futile) attempt to create a single ‘toolset to rule all toolsets’. This conceptual platform needs to support any relevant type of information, created and edited in any appropriate way, across any appropriate type of device and user-interface.

In essence, everything in an architecture and its context ultimately depends on everything else. Existing model-types such as UML, BPMN or Archimate may be useful as views into the ‘hologram’ of that ‘everything’, but should not be allowed to arbitrarily constrain the structure or the permissible views into that structure.

Because everything depends on everything else, every ‘thing’ needs to be treated as an ‘equal citizen’ within the overall hologram, and may be viewed, explored, examined and connected in exactly the same overall way as every other type of ‘thing’ – represented internally by the respective data-object. Models and model-types may constrain the views and, to some extent, the connectivities between ‘things’, but should not exclusively define them: that distinction is somewhat subtle, but crucially important. Data-objects may be worked on or edited by any number of appropriate ‘editors’ (model-types etc): the editor acts on the data-object, but an object is not inherently dependent on that editor for its structure or existence.

(See notes later in this post on structure, functionality and definition of editors.)

The core purpose of the ‘hologram’ is to support at least two key questions that can be asked about any object:

  • tell me about yourself (and why)
  • tell me what you’re associated with (and why)

It should also be possible to ask key questions about objects in relation to time:

  • tell me about your history (including change-history for the object-record itself)
  • tell me how you expect to change

Data-structures

As an ‘equal citizen’, every data-object has the same conceptual structure (a rough sketch follows the list below):

  • unique-identifier
  • mandatory function-identifier(s) (type, role etc)
  • 0..n data-items, such as editable fields or non-editable tags (supports “tell me about yourself”, “tell me about your history” and “tell me how you expect to change”)
  • 0..n links to or between other data-objects (supports “tell me what you’re associated with” and “tell me how you expect to change”)
  • 0..n change-history records (supports “tell me about your history”)
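
As a rough illustration only, a single data-object of this kind might look something like the sketch below; every property-name here is a placeholder of my own, not a settled schema.

```javascript
// Minimal sketch of one data-object, expressed as a plain JavaScript object.
// All property-names are hypothetical illustrations, not a fixed schema.
const exampleObject = {
  id: "uuid-of-this-object",                // unique-identifier (UUID or similar)
  role: "thing",                            // mandatory function-identifier: 'thing', 'link', 'field' etc
  tags: ["archimate", "business-service"],  // non-editable tags, usable as filter and link targets
  fields: [                                 // 0..n data-items ("tell me about yourself")
    { name: "title", value: "Place Order" },
    { name: "description", value: "Customer places an order" }
  ],
  links: [                                  // 0..n links ("tell me what you're associated with")
    { target: "uuid-of-other-object", why: "realises" }
  ],
  history: [                                // 0..n change-history records ("tell me about your history")
    { at: "2014-11-01T10:00:00Z", change: "created", by: "uuid-of-user" }
  ]
};
```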

— [Discussion] Some details of the structure – and, particularly, how best to implement it at scale, in a multi-user context – still need to be resolved. The core concept I developed some while back was the notion of a ‘mote’:

Extending the ‘equal citizen’ concept:

  • a ‘thing’ is represented by a data-object that has (or connects to) any number of fields and tags, but perhaps no links
  • a ‘link’ is a specific category of data-object that is still a ‘thing’, but carries data for one or more (typically but not always two) link-connections
  • a detached ‘field’ is a specific category of data-object that carries data, and a single link-connection that points to its parent ‘thing’
  • the ‘role’ for a data-object indicates its category in the sense of ‘thing’, ‘link’, ‘field’ etc – yet each still has essentially the same type of structure

— [Implementation] The data-structure could be implemented in several different ways, each of which implies different trade-offs. In my explorations so far, it looks like it could be implemented in SQL with something like 3-6 tables, or as a single ‘collection’ (table) in a NoSQL database such as MongoDB. I’ve also looked at graph-databases such as Neo4J, but I can’t see any real advantage as yet. Either way, we’d probably need a data-abstraction layer to enable the same data to be managed across different implementations that use different underlying data-technologies.

Although initial implementations could get away with something simpler, ultimately the data-object identifier does need to be globally-unique, which suggests a necessity for UUID or similar.
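
Purely as an illustration of how the same conceptual structure might sit behind a thin data-abstraction layer, here’s a minimal sketch assuming a MongoDB-style backend; the class-name, database-name and collection-name are all hypothetical, and an SQL implementation could sit behind the same interface.

```javascript
// Sketch of a thin data-abstraction layer, assuming a MongoDB-style backend.
// Names (MoteStore, 'hologram', 'motes') are hypothetical, not a settled design.
const { MongoClient } = require("mongodb");
const { randomUUID } = require("crypto");

class MoteStore {
  constructor(collection) { this.collection = collection; }

  // Create a data-object with a globally-unique identifier.
  async create(role, fields = [], links = []) {
    const obj = { _id: randomUUID(), role, fields, links, history: [] };
    await this.collection.insertOne(obj);
    return obj;
  }

  // Resolve an object by its identifier ("tell me about yourself").
  async get(id) { return this.collection.findOne({ _id: id }); }
}

// Usage sketch: the same interface could front an SQL implementation instead.
async function demo() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const store = new MoteStore(client.db("hologram").collection("motes"));
  const thing = await store.create("thing", [{ name: "title", value: "Place Order" }]);
  console.log(await store.get(thing._id));
  await client.close();
}
```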

Data-exchange

All implementations must support a shared data-exchange format, such that data-objects can be shared across any other implementations and instances as appropriate.

This data-exchange-format must be able to support all data-object types – including editor-definitions, as described later. Within the data-structure, linking is done by reference to data-object-ID, hence recreating links would not in itself require referential-integrity to be maintained within the data-exchange format: however, user-interfaces and queries should be able to operate without failing if links cannot be completed or link-targets are ‘missing’ (see ‘Access-control’ later).

This shared data-exchange format must be non-proprietary.

Other data-imports and exports should be supported, in a wide variety of forms. Where appropriate, imports and/or outputs may be in proprietary formats, or reference external data-objects that are stored only in closed and/or proprietary formats.

— [Discussion] Probable candidates for the common exchange-format would include XML, JSON or Extended JSON (to support binary-data). Given the constraint that data-objects may have any number or type of fields, XML may not be flexible enough for the purpose.

— [Implementation] At present, I’ve assumed a JSON implementation for the exchange-format. Import and export would be added as implementation-maturity develops – for example, export to email, HTML and/or PDF would be fairly high priorities. Import from JSON should be sufficient at first, as simple standalone code-utilities can do basic translation from other formats into JSON.
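
As a guess at what such a JSON exchange-package might look like – the envelope fields shown here are hypothetical, not a defined standard – something like the sketch below would keep each data-object self-contained, linked only by ID, so that missing link-targets don’t break an import.

```javascript
// Sketch of a possible JSON exchange-package: an array of self-contained
// data-objects, linked only by ID. The envelope fields are hypothetical.
const exchangePackage = {
  format: "ea-hologram-exchange",    // hypothetical format identifier
  version: "0.1",
  objects: [
    {
      id: "uuid-thing-1",
      role: "thing",
      fields: [{ name: "title", value: "Place Order" }],
      links: [{ target: "uuid-thing-2", why: "serves" }]   // target may be absent on import
    },
    {
      id: "uuid-editor-1",
      role: "editor-definition",     // editor-specifications travel in the same package
      fields: [{ name: "name", value: "Basic form editor" }]
    }
  ]
};

// Export is then just serialisation; import is the reverse, tolerating unresolved links.
const fileContents = JSON.stringify(exchangePackage, null, 2);
console.log(fileContents);
```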

Access-control and linking

Ultimately, the platform should support access-control for view and/or edit at field level – not just primary-object level. Fields would typically inherit access-rights from their ‘parent’ at time of creation, but could be amended as appropriate.

Although access-rights should be attached to individual data-entities, the access-rights on data-objects should not be visible (other than in specific administration user-interfaces), or transferred in plain-text form across inter-system exchanges.

Users acquire access-rights via logon, and thence via assigned membership of groups. However, basic editors and data-storage should allow a default ‘public’ option for which no logon is required. (In part this is to allow ‘test-drives’ and/or fully-standalone operation.)

Queries should not return data-objects for which the user does not have view-rights. Where appropriate, these constraints should – as above – be supported both at whole-object and individual-field level. (Note that this is likely to mean that some links will seem ‘incomplete’.)

Editor user-interfaces should not allow edit on data-objects for which the user does not have edit-rights. (Only visible-objects – objects with access-rights to view – should have been returned from queries.)

Where access-controls limit visibility at field-level, editor user-interfaces should not show anything for items which are not visible – for example, field-captions should not be shown for those items.

Where appropriate, individual editors and the platform as a whole should support linking at whole-object and individual-field level. This should not be constrained by the underlying data-structure: for example, whether fields are embedded ‘in’ a parent, or as separate-but-associated data-objects.
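
To make the rules above a little more concrete, here’s a minimal sketch of access-control filtering at whole-object and individual-field level; the group-based rights model and the ‘public’ default shown here are assumptions about the eventual access-control design, not a decision.

```javascript
// Sketch of access-control filtering: objects without view-rights are dropped
// entirely, and hidden fields are removed (so no captions are shown for them).
// The acl shape and group-based model are assumptions, not a settled design.
function canView(acl, userGroups) {
  if (!acl || acl.view === "public") return true;           // default 'public' option
  return acl.view.some(group => userGroups.includes(group));
}

function filterForUser(objects, userGroups) {
  return objects
    .filter(obj => canView(obj.acl, userGroups))             // whole-object level
    .map(obj => ({
      ...obj,
      fields: obj.fields.filter(f => canView(f.acl, userGroups))  // field level
      // links are left as-is: some may now appear 'incomplete', by design
    }));
}

// Usage sketch: the 'cost' field is omitted for a user in the 'architects' group.
const visible = filterForUser(
  [{ id: "a", acl: { view: ["architects"] }, fields: [
      { name: "title", value: "Place Order", acl: { view: "public" } },
      { name: "cost", value: "£15k", acl: { view: ["finance"] } }
    ], links: [] }],
  ["architects"]
);
console.log(JSON.stringify(visible, null, 2));
```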

— [Discussion] There will obviously need to be a lot of further discussion on the user/access-control model(s) to be used for this…

— [Implementation] Linking to individual-fields can be supported by some equivalent of the HTML ‘anchor’ concept. For access-control, I already have full source-code for a simple user-access mechanism from a PHP-based wiki that controlled access via user-groups down to page-level: this would probably be sufficient for initial experiments, and maybe even for first public tests.

Devices and interface-metaphors

The platform should be accessible and usable on any device across the entire toolset-ecosystem, from multi-screen ‘war-room’ setups right down to limited but web-capable feature-phones, and maybe even to dumb-phones via SMS and MMS. (Implementations and capabilities may and will be radically different across the toolset-ecosystem, but the platform and its underlying data should be available throughout.)

Access to large shared-repositories would typically require internet-access, but the platform will also support offline use on any appropriate device, subject to device capability and capacity.

The platform will support any appropriate type of user-interface metaphor or technology. (This also means that we need to provide conceptual ‘hooks’ to support user-interfaces that are at present barely experimental – such as 3D touch-based systems – or don’t yet exist at all.) In practice, this is something of an aspirational goal, but given current technologies, the current ‘desired’-list should probably include:

  • SMS/MMS request/response (1D)
  • keyboard/mouse (2D)
  • touch/gestural (2D)
  • hands-free gestural (2D/3D)
  • side-menu / popup-menu / drag-and-drop etc

— [Discussion] This too will need a lot of discussion – mainly around prioritisation of interfaces and interface-choices.

— [Implementation] My current implementation-design is based on Javascript/HTML5 web-technologies, which could be auto-configured to support keyboard/mouse and touch/gestural interfaces. The Meteor platform that I’m experimenting with at present also supports offline-use, via a local simplified-MongoDB instance.

Editors

Editors are the means by which data-objects and links can be created and edited.

The platform will support any number and type of editors, which may be used on any type of device – subject to device capability and capacity – and act on underlying data in any appropriate way.

The platform will support ‘official’ or ‘controlled’ editors – those that require specific capabilities, or direct access to user-interface code, access-control code or underlying file-system. For security-reasons, the source-code for some – maybe most – of these should not be publicly available, and in some cases may be proprietary. Some of these editors – particularly the ‘official’ editors developed by the core-group that typically provide core-functionality – may expose script-interfaces that ‘unofficial’ or user-defined editors can call within their scripts, to access, in a controlled and safe way, functionality that would or should not otherwise be available.
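
As a rough illustration of that script-interface idea – every name and the in-memory repository here are hypothetical, purely for the sketch – an ‘official’ editor might hand user-defined editors a limited, frozen interface rather than direct access to storage or access-control code:

```javascript
// Sketch: 'official' editors expose a limited, frozen script-interface to
// 'unofficial' user-defined editors, rather than direct access to storage,
// the file-system or access-control code. All names here are hypothetical.
function makeScriptInterface(repository) {
  return Object.freeze({
    // controlled creation: the script never touches the repository directly
    createThing(fields) {
      const obj = {
        id: `tmp-${repository.length + 1}`,  // placeholder id; a real store would issue a UUID
        role: "thing",
        fields,
        links: [],
        history: []
      };
      repository.push(obj);
      return obj;
    },
    // controlled reads; a real implementation would also apply the caller's
    // access-control rights before returning anything
    listThings() {
      return repository.filter(o => o.role === "thing");
    }
    // deliberately absent: delete, raw queries, access-control changes
  });
}

// A user-defined editor's script would be handed only this interface:
const repo = [];
const api = makeScriptInterface(repo);
api.createThing([{ name: "title", value: "Place Order" }]);
console.log(api.listThings().length); // 1
```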

‘Official’ editors would be installed via a package-load mechanism, which may support encrypted proprietary source-code, for server-side use only, that is not directly accessible from the user-interface.

The platform will support user-defined or ‘unofficial’ editors, defined via a standardised scripting-language, or, later, some form of graphic or drag-and-drop ‘editor-builder’ interface. By this means, users will be able to define editors for any type of model that they need, which can create, access and link data-objects in the ‘hologram’ – including, where appropriate, data-objects and links not previously created by that editor-type.

User-defined ‘unofficial’ editors would be installed via loading and compiling a specification defined in the scripting-language. These editors may be saved to and imported from an optionally-editable text-file. The file-format for specifications for user-defined editors shall be open and non-proprietary. Language-elements should support separate loading of graphics and any other non-text items needed by the editor-specification.
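
Since the scripting-language itself is still only roughed-out, the sketch below is no more than a guess at the kind of information a loadable editor-specification might need to carry, expressed here as a plain object; every element shown is an assumption.

```javascript
// Hypothetical sketch of a user-defined editor-specification, as it might be
// loaded from an open text-file. Every element here is an assumption about
// what such a specification might need, not the actual scripting-language.
const editorSpec = {
  name: "Simple service-model editor",
  createsObjectTypes: [
    {
      tag: "service",                        // tag applied to data-objects this editor creates
      fields: [
        { name: "title", label: "Service name", default: "" },
        { name: "sla", label: "Service-level agreement", default: "none" }
      ]
    }
  ],
  linkRules: [
    // tag-matching link-constraints, as described in the linking-rules note below
    { from: "service", to: "service", linkTag: "depends-on" }
  ],
  labels: {                                  // label-sets for internationalisation
    "en": { title: "Service name" },
    "nl": { title: "Naam van de dienst" }
  },
  graphics: ["service-icon.svg"]             // non-text items loaded separately
};
```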

Each editor may create new data-objects in accordance with the editor’s own data-specification, including numbers, names, types and default-content for fields and tags. Editors may be configured to create data-objects of multiple-types (e.g. as per UML, BPMN, Archimate). New data-objects are tagged as ‘belonging’ to that editor, but may be manipulated, extended, linked-to etc by other editors as appropriate.

As noted earlier, the underlying data-structure does not impose any inherent constraints on linking: any object may potentially be linked to any object, including itself (for recursive links). An editor may constrain visibility of links to those that it recognises as valid, but may not remove or delete any that it does not recognise as valid in its own terms. (This allows data-objects to be used in different editors and different views, in ‘controlled’ form as per the respective editor and view, yet still each retain its overall integrity as a ‘thing’ in its own right.)

An editor may constrain linking-rules for data-objects: a probable mechanism to do this is via matching of tags as defined in the editor’s data-specification for link-types and for primary data-objects. (This mechanism would enable the kind of formal-rigour required in e.g. UML, BPMN and Archimate.)
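
A minimal sketch of that tag-matching mechanism, reusing the hypothetical editor-specification shape sketched above: a link is accepted by an editor only if some rule in its specification matches the tags on both ends.

```javascript
// Sketch of tag-matching as a linking-rule mechanism. Rule and object shapes
// follow the hypothetical editorSpec sketched earlier; this is an assumption,
// not a defined mechanism.
function linkAllowed(editorSpec, fromObject, toObject, linkTag) {
  return editorSpec.linkRules.some(rule =>
    fromObject.tags.includes(rule.from) &&
    toObject.tags.includes(rule.to) &&
    rule.linkTag === linkTag
  );
}

// Usage sketch: only link-types the editor recognises are offered to the user;
// unrecognised links on the same objects are hidden by the editor, not deleted.
const ok = linkAllowed(
  { linkRules: [{ from: "service", to: "service", linkTag: "depends-on" }] },
  { tags: ["service"] },
  { tags: ["service"] },
  "depends-on"
);
console.log(ok); // true
```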

Within its specification, an editor may define field-labels etc for display on user-interfaces, including printing. (This would also provide a means to support internationalisation, as multiple label-sets could be defined within an editor-specification, or variant editor-specifications – same data-structure, different labels – used for different languages.) If fields are visible in terms of access-control, but the respective editor is not available to the user-interface (as may happen with imported data, or when the device is being used offline), the respective field may be shown in non-editable form on the user-interface, with the data-object field-name in place of the respective editor’s field-label.

— [Discussion] Again, a lot of discussion needed on this, though I’ve already roughed-out the core-elements for an initial version of the scripting-language for user-defined editors.

Some of the ‘official’ editors we’ll need, in approximate order of implementation:

  • basic form-based create, view and edit (provides base-functionality for all data-object creation and edit)
  • basic linker (provides base-functionality for all linking)
  • basic grouper (provides base-functionality for all ‘group’-types – sessions, themes, topics, projects etc)
  • tabular-type edit (provides support for table-based interfaces and data)
  • basic graphic-base (provides base-functionality for scalable background-graphics and image-maps)
  • free-form sketcher (provides drawing capability, including user-selected image-maps)
  • builder/linker (provides functionality to support object-plus-link models such as Archimate)
  • media-capture and embed (uses links to device-hardware for audio, photo, video – embed by value [data] or by reference [URL etc])

The ‘unofficial’ or user-defined editors could be anything at all (subject to what the platform can support at the time, via the functionalities provided by the ‘official’ editors), creating new data-objects or amending existing ones in any way that makes sense for that person, and linking anything to anything else in any way that makes sense. That ‘anything-connects-with-anything-else’ is where the power of this platform will really reside.

Note also that the ‘equal-citizen’ concept means that although an editor can provide views that support a virtual concept of layers or suchlike (as in Archimate or Enterprise Canvas), there is, by explicit intent, no actual layering hardwired into the underlying data-structure. For more detail on the reasoning behind this design-decision, see the posts ‘On layers in enterprise-architecture‘ and ‘Unravelling the anatomy of Archimate‘.

— [Implementation] I’ve roughed out all of the code for basic form-based, basic linker, basic grouper and tabular-edit and the first parts of basic-graphics, and how to define them in the scripting-language for editor-specifications. I have some ideas about Javascript libraries and the like that could handle the deeper parts of the basic-graphics and free-form sketcher. Beyond that, though, I’ll definitely need help!

Query, search and navigation

The platform shall support appropriate query, search and navigation.

For query, a query-builder will be needed that interacts with the currently-selected editor to define permitted parameters for the search. The query-language specification shall be open and non-proprietary – at least for queries and searches available to user-defined editors.

For search, the current query will be combined with the current access-control rights to define the object-set to be searched for and returned to the respective editor.
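
As a rough sketch of the kind of structured query a query-builder might emit – the query shape here is an assumption, not a defined query-language – the results would then be passed through access-control filtering (as sketched in the access-control section above) before being returned to the editor.

```javascript
// Sketch of a basic structured query, run over the repository before
// access-control filtering strips out anything the user may not view.
// The query shape is a guess at what a query-builder might emit.
function runQuery(allObjects, query) {
  return allObjects.filter(obj =>
    (!query.role || obj.role === query.role) &&
    (!query.tag  || (obj.tags || []).includes(query.tag)) &&
    (!query.text || (obj.fields || []).some(f =>
        String(f.value).toLowerCase().includes(query.text.toLowerCase())))
  );
}

// Usage sketch: "all 'thing'-objects tagged 'service' that mention 'order'"
const hits = runQuery(
  [{ id: "a", role: "thing", tags: ["service"],
     fields: [{ name: "title", value: "Place Order" }] }],
  { role: "thing", tag: "service", text: "order" }
);
console.log(hits.length); // 1
```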

For navigation, clear distinction needs to be drawn between navigation within the object-set from a current query, versus SEO-type auto-navigation for indexing-bots. The former operates primarily or exclusively on the client; the latter in effect operates solely on the server for the data-repository.

— [Discussion] This is another theme that will need a lot of discussion – though there’s sufficient design already in place for basic-level querying and searching, and for Javascript-based navigation and routing on web-type user-interfaces.

— [Implementation] As per immediately above, I’ve roughed out fairly-detailed designs for that basic-level query, search and navigation.

Change-history

In principle, the platform should maintain a full change-history of all changes: other than specific admin-cleanup, nothing gets deleted. This should enable ‘replay’ (conceptually similar to ‘scrubbing’ in audio or video), to review past history and to permit cloning or branching at any point.
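
A minimal sketch of that append-only change-history and ‘replay’, covering only the straightforward per-object case noted below; the record shape is an assumption of my own.

```javascript
// Sketch of an append-only per-object change-history with simple 'replay':
// nothing is deleted, and the object's state at any point in time can be
// reconstructed by replaying field-changes up to that moment.
function recordChange(obj, fieldName, newValue, who) {
  obj.history.push({
    at: new Date().toISOString(),
    by: who,
    field: fieldName,
    value: newValue
  });
}

function replayAt(obj, isoTimestamp) {
  const state = {};
  for (const change of obj.history) {
    if (change.at > isoTimestamp) break;     // history is stored in time-order
    state[change.field] = change.value;      // later changes overwrite earlier ones
  }
  return state;                               // snapshot of fields at that moment
}

// Usage sketch:
const obj = { id: "uuid-1", history: [] };
recordChange(obj, "title", "Place Order", "tom");
recordChange(obj, "title", "Place Customer Order", "tom");
console.log(replayAt(obj, new Date().toISOString())); // { title: 'Place Customer Order' }
```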

— [Discussion] This is fairly straightforward at a basic per-object level, but would soon gain real complexities with challenges around access-control, real-time multi-user and individual-object versus multi-object sessions.

— [Implementation] I’ve roughed out code and data-structures for a basic per-object level. Beyond that, I’ll definitely need help.

Okay, stop there for now, I guess.

Over to you for comment and advice, anyway, if you would? – many thanks!

16 Comments on “New toolsets for enterprise-architecture”

  1. Re: the data structure – definitely agree w/ what you’ve listed there in principle. One question, one observation:

    – Would the attributes of a particular data object be full-fledged data objects themselves? I can see pros (rich and flexible) & cons (might be a bit bulky, particularly depending on the depth of recursion).

    – I wonder if the role/type (depending on how you define it), might be best linked to the attributes. This would mean that while all attributes are linked to the data object directly, you could use the role/type relationship to constrain which attributes are available to a given tool.

    I also agree re: the persistence choices (SQL or document DB). As you stated, there are trade-offs with either.

    • Thanks, Gene – much appreciated!

      @Gene: “Would the attributes of a particular data object be full-fledged data objects themselves?”

      Yeah, that’s a core question, and I’ve been round that same loop several times now, because of exactly those trade-offs you mention. The only viable answer I’ve found so far is “it depends” 😐 – which means that in practice we need to be able to support both, in any appropriate mix. (The mix will probably need to vary depending on the underlying database-technology, too.) The key point so far is that the structure described above can support all three options: fully-embedded, ‘semi-detached’ (i.e. dependent child, as in the classic SQL pattern) and fully-independent (‘equal-citizen’ with link-pointer to parent). And there are identifiable mechanisms to support the change-history in each case.

      The real trick and challenge there is not so much in how we structure it in the database, but how we describe it in the data-exchange file-format, such that it can be stored in one mix of options for one database-technology, but unpacked on import into a different mix that’s better optimised for another database-technology.

      @Gene: “you could use the role/type relationship to constrain which attributes are available to a given tool”

      There’s a difference between role/type for the data-structure itself, versus role/type for a given editor. (Remember that a data-object may accumulate ‘child’-fields from any number of editors, not just one.)

      The role/type for the data-object itself is part of the ‘mandatory-data’ section, and is only a fairly small range of core-options (‘thing’, ‘field’, ‘link’ etc) – it’s about how the data-system works, not how the editor-system works.

      The mechanism I’ve been exploring so far for the editor-side is based on tagging: small non-editable text-string tags that provide easy targets for filters and suchlike. A data-object’s fields, and the data-object overall, acquire tags, initially from its creating-editor, but also from other editors as required. An editor identifies the fields that match up to its own tags, and displays them accordingly. We can also provide an option such that, in some of its views, an editor may additionally show fields that are not directly supported by that editor – though probably shown only in read-only form.

      We can also use the same tags as targets for links, to constrain what types of links may be attached to what type of data-object, in an editor (e.g. Archimate, UML) that requires formal-rigour on linking. Interestingly, doing this via tagging turns out to be a lot simpler than the classic method of ‘define your objects, then define your permitted links between objects’, where the number of definitions rises near-exponentially with the number of object-types.

      Hope that makes some reasonable sense so far – any comments on that? Thanks again, anyway.

      • “The only viable answer I’ve found so far is “it depends””

        What was it Willy S. said about petards? 🙂

        “The real trick and challenge there is not so much in how we structure it in the database, but how we describe it in the data-exchange file-format, such that it can be stored in one mix of options for one database-technology, but unpacked on import into a different mix that’s better optimised for another database-technology.”

        Agreed…my typical MO is to come up with an internal API to and from which physical storage and interchange formats are just a mapping exercise.

        “There’s a difference between role/type for the data-structure itself, versus role/type for a given editor.”

        Ah, okay, that was the missing piece I needed.

        “(Remember that a data-object may accumulate ‘child’-fields from any number of editors, not just one.)”

        Indeed. I think we’re on the same page with the tagging you mentioned. This would allow for the same ‘thing’ to be shared across multiple editors while retaining identity. Each editor can attach its own unique attributes as well as access those that are shared while ignoring those that don’t apply to its context.

        There might even be a use case for some sort of ‘God Mode’ analysis tool to find and visualize multi-faceted ‘things’ in the repository.

        “Hope that makes some reasonable sense so far”

        Absolutely.

        Is it reasonable to say that we basically have a CRUD API for the repository for the use of the editors? I could potentially see it also needing a capability for applying a ‘profile’ (collection of attributes, etc. related to an editor) to a thing at the repository level, _if_ we stored metadata in the repository about the profiles.

        • Thanks again, Gene!

          @Gene: “my typical MO is to come up with an internal API” etc

          Great! – because that means I have someone to call on to help me get out of a mess on database-design and implementation. 🙂 (I have a fair amount of experience with databases of various types, but, to be honest, sufficient only for fairly simple prototypes – not the serious multi-user/proper-security/proper-scaling etc that’ll be needed very quickly for this as soon as we get much further than a decent proof-of-concept.)

          @Gene: “This would allow for the same ‘thing’ to be shared across multiple editors while retaining identity.”

          Yes, exactly – that’s the whole point of all of this.

          @Gene: “There might even be a use case for some sort of ‘God Mode’ analysis tool to find and visualize multi-faceted ‘things’ in the repository.”

          That’s actually one of the first editors that we need to build, because it’d be the only way we could fully explore the content and structure of any arbitrary item. Interestingly, it would probably be one of the simplest to write, because all it has to do is read the entire content in point-by-point order, without having to cross-map back to another editor.

          @Gene: “Is it reasonable to say that we basically have a CRUD API for the repository for the use of the editors?”

          Yes, plus query-language and background administration-type stuff. (In effect there are only three types of items being stored – data-objects, editor-specifications and user-accounts – so the administration is pretty simple. Even simpler if we categorise editor-specifications as themselves a type of data-object, of course.)

          @Gene: “I could potentially see it also needing a capability for applying a ‘profile’”

          I can guess at what you mean there, but not sure I’ve got it right: expand on that ‘profile’-concept a bit, if you would? – thanks!

          • My idea of ‘profile’ and your ‘editor specifications’ might overlap…essentially I see a profile as a view/stereotype of a data object. For example, “Place Order” as a data object might have Archimate.BusinessService, Archimate.ApplicationService, EnterpriseCanvas.Service, SomeHypotheticalSOAModeler.Service, and UML.Class as profiles. Each profile would have a collection of attributes associated with it, some that are shared with other profiles, some that are unique.

          • This idea (http://t.co/MjnfBS4sqI) came to me based on the “Overview first, zoom and filter, then details-on-demand” link Tom tweeted yesterday. It breaks things into three separate types (Things, Attributes, and Links) rather than keeping everything in one type, but it is really conducive to the whole roll up/drill down idea. It also feels very amenable to the two types of persistence mechanisms we’ve talked about so far (SQL & NoSQL/Document-centric). Thoughts?

  2. I may have made the point elsewhere elsewhen, but your core data model concept could probably be met by RDF descriptions (perhaps with OWL constraints). The key RDF concept of “anyone can say anything about anything” sits well with your ideas. You would automatically get a choice of import/export formats – I favour Turtle simply because it is compact and readable. You would also have a choice of data stores and a low level query language in Sparql. A big challenge would be to map higher level visual concepts onto this storage, but there is already work in that area. I have managed to create passable Archimate diagrams this way.

    • Many thanks for this, Peter. You’re the second person to prod me about RDF/OWL in the past two days, hence clearly I need to look at it much more closely than I have done to date!

      In a sense, it’s obvious that I need to look elsewhere – otherwise there’s not so much a risk as a certainty that I’ll end up trying to reinvent a wheel that others have already done, far better than I could. What I wanted to do in this post was try to get a sense of how all the various parts would fit together, and what they each need to do. We’re still not there yet – not by quite a long way, to be honest – but I hope this is helping things to move a bit more in the direction that we need.

  3. Tom, as you’re looking for platforms for tooling, you might find this one interesting: http://www.marklogic.com/what-is-marklogic/marklogic-semantics/ I’m just starting to investigate that myself.

    I have a lot of thoughts on this whole topic, which go back to the business modeling tool we developed in Pacific Bell in the late ’80s, with IDEA/AHA, the experimental semantic network dbms out of IBM Research that we used in a limited technology trial. All of that is much too long a story for here, but at some point it would be great to take a deep dive.

  4. Hi there,

    Can I suggest ORM (Object-Role Modeling) as an alternative to OWL. ORM is more semantically verbose than OWL (OWL semantics may be expressed in an ORM Model, but not all semantic objects in ORM may be expressed in OWL).

    I’m currently working with an ORM MetaModel that requires only 19 tables to express any MetaModel expressible in first-order logic (e.g. the MOF, UML’s Superstructure etc are expressible as first-order relations. i.e. can be stored in a relational model).

    Reading this blog entry, I can safely say that the ORM MetaModel can achieve most (if not all) of the requirements specified here, and the ORM MetaModel expressed as XML the same.

    There is a current open metamodel exchange standard being developed through the FBM Working Group (Fact-Based Modeling Working Group).
    http://www.factbasedmodeling.org/home.aspx

    kind regards
    Victor

    • Many thanks for that, Victor.

      As you’ll see from some of the other posts, I’ve already come to much the same conclusion around ORM versus OWL. The critical problem is around “can achieve most (if not all) of the requirements specified here”: if we don’t achieve all of the requirements, all we’d be doing is further dotting-the-joins – defeating the whole object of the exercise.

      It’s not about enforcing a single ‘solution’, but being able to support any type of ‘solution’. For comparison, the mote-structure can be implemented (in MongoDB, for example) as a single document-database ‘collection’ (i.e. schema-less ‘table’); an ORM Metamodel could then be applied to selected content of that collection. The crucial concept here is that the ‘things’ come first – not the metamodel.

      Remember too that much of the content we deal with in enterprise-architecture is ‘messy’, is not ‘fact’ as such, but instead often in an indeterminate state before some of it can be coalesced into ‘fact’. Some of it never will coalesce into ‘fact’ – for example, the blurriness of many of the inputs into sensemaking and decision-making – and we need to keep it as ‘pre-fact’ in order to be able to apply alternative perspectives to arrive at alternative sense, and thence refresh and review our sensemaking and decision-making over time. The moment we fix something as ‘fact’ is often the point at which we can start to exploit it as ‘fact’, yet is also the point at which we potentially trap ourselves into a situation in which we no longer have any alternatives available to us when the world changes around us – which is Not A Good Idea. We need to build an understanding of those trade-offs right into the very roots of what we’re building here: even with something as versatile as ORM, we run a risk of forcing our ‘solutions’ into too narrow a box for the needs of the real world. Makes a bit more sense now, I hope?

  5. Hi all,

    Perhaps this is what you are looking for. CQL is an initiative of Clifford Heath in Australia as part of Clifford’s Data Constellation convergence toolset.

    Remarkably, the Data Constellation logo looks much like a Mote diagram, and it becomes apparent why. Both Richmond (my research) and CQL are based on ORM (Object-Role Modeling).

    Natural Language (Human Readable) fulfilled: http://dataconstellation.com/ActiveFacts/examples/glossary/index.html#Comment

    What the exchange format would look like under CQL: http://dataconstellation.com/ActiveFacts/examples/CQL/Address.cql

    What the ORM equivalent of a Mote diagram would look like: http://dataconstellation.com/ActiveFacts/examples/images/Address.png

    Research in this area is well advanced. Combined, Clifford’s research and Richmond equate to many hundreds of thousands of lines of code to make it work.

    Both Clifford and I are contributing to the Fact-Based Modeling Exchange MetaModel, which will undoubtedly have an XML exchange format, so both Richmond and CQL will be able to share Models.

    Either way, may I suggest that ORM is definitely what you are looking for. ORM breaks the Universe Of Discourse (the ‘Enterprise’ say) into:

    Entity Types: Types of things, represented by instances of their Value Type.

    Value Types: Representing types of Values (e.g. Integer, Mime Type etc)

    Fact Types: Relationships between Entity Types, Value Types and Fact Types (e.g. Person has FirstName)

    Role Constraints: e.g. Person has at most one FirstName

    Facts: e.g. Person (‘123’) has FirstName (‘Peter’)

    Values: e.g. ‘Peter’

    These 6 concepts have been discussed here as what Tom calls ‘Motes’. I call them Concepts.

    More information about CQL:
    http://www.slideshare.net/cjheath/the-constellation-query-language
    http://www.slideshare.net/cjheath/semantic-modeling-a-query-language-for-the-21st-century?next_slideshow=1

    An academic paper on CQL:
    http://link.springer.com/chapter/10.1007/978-3-642-05290-3_84

    If you know how to use Ruby, you can start using CQL today by downloading CQL at:
    http://dataconstellation.com/ActiveFacts/download.html

    I hope this is helpful and gets a good start to what you wish to achieve.

    kind regards,
    Victor

    • Hi Victor – many thanks for all those links, and yes, I really do like CQL, and ORM, and the ideas you’ve described for your Richmond platform.

      Once again, though, what I’m asking you to explore here is to pull back from those prepackaged ‘solutions’ – which, in effect, is what they are – and instead look at the total context-space that exists before you would apply such a ‘solution’. CQL, ORM and your platform (as I understand it so far?) all apply only when the ‘messiness’ has been stripped-out (or, perhaps more accurately, are each a means to strip out some of the last of the ‘messiness’?). In other words, they may well be useful out at the far end of the Squiggle, and beyond into ‘fact’-based inferences and suchlike – but they can be dangerously misleading elsewhere.

      In development of enterprise-architectures and the like, we need to be able to work with the entirety of the Squiggle: not merely the ‘easy bits’ over at the far end, but all manner of proto-‘facts’ and probabilities and perhapses and flat-out-falsehoods-but-we-don’t-know-it-yets – in other words, a full modal-logic of possibility-and-necessity, all the way down to absolute-uncertainty, unpredictability and uniqueness, rather than solely the easy pseudo-certainties of true/false logics. CQL and ORM and OWL-style semantics are overlays onto the context-space, each an interpretation or filter that we apply to the context-space – not the context-space itself. Hence they need to come after the ‘things’ of the context-space – not before. The schema comes after the ‘thing’ – not before. And because we could use any number of schemas, the ‘thing’-description needs to be able to support any number of schemas – but again, the ‘thing’ needs to come first, independent of any schema. Unless we frame it that way round – ‘thing’-first, schema(s) after – we would always be constrained by the limitations of the chosen schema. Which would be Not A Good Idea – especially over at the starting-end of the Squiggle.

      In short, to make this work, we need to flip the usual thinking on its head: thing-first, not schema-first. I hope that makes a bit more sense?

  6. Hi Tom,

    I’m kinda with you on that and kinda not. More with than not.

    Perhaps there’s a small misunderstanding which I can clear up.

    Firstly, I’ll say that CQL seems a better candidate for the type of ‘human readable’ exchange format than the XML I sent you. So I’ll grant that. It took me a while to warm to CQL, but once I accepted that someone has gone to the effort of writing a parser to parse structured ‘English’ (or any Romance/non-idiomatic language really, with a bit of work), then I could relax and accept it as something truly special.

    A small misunderstanding and in agreement
    =========================================

    Both Clifford’s metamodel for the CQL extended tools and my metamodel for Richmond have a base class of ‘Concept’. One could envisage a user just typing in Concepts like ‘Person’, ‘Account’, ‘Payment’, ‘Hardware’, ‘Vehicle’, etc and start to populate the far-left of the squiggle. In effect, the user starts to build the Universe Of Discourse (in our case, a model of the Enterprise). So, in that respect, I kinda disagree, ORM doesn’t have to be far-right-squiggle.

    I grant you that the examples of CQL in the links I provide don’t say ‘Person is a Concept’ as a starting point, which perhaps they could and should.

    In the XML that I sent you, I do do that by having a Model Dictionary, which just lists the name of each Concept in the Model.

    So, I disagree that an ORM model represents the far-right of the squiggle and requires the formation of a schema only.

    However, where I can see your point and would like to channel some thinking is that the moment a user creates a Concept (or Mote) that is a relationship between Concepts/Motes…that’s it…you’ve started to talk schemas. There really is no choice…either things have a relationship with other things (Motes with relations to other Motes), or they are not related to anything.

    Where you are correct (strictly) is that in ORM proper, it just isn’t done to say “Person”, and not have any relations to “Person”. But we can do what we like and still use the language syntax. That’s our prerogative.

    My thoughts are that the vision is sound: “A human readable exchange format [in structured language syntax]” that is parseable. Clifford has proved that to be possible.

  7. Hi Tom, all,

    I had a bit more of a think about it and can clear up another small misunderstanding.

    I’m suggesting that the ORM ‘MetaModel’ is what is important, rather than ORM schema. For the following reasons:

    1. “Thing first”: I agree, and without any schema drawn as a diagram, the ORM metamodel supports the kinds of Concept (Motes) described in these posts (Things, Relationships, Values, Value Types, Notes).

    e.g. In Richmond I have a GUI form called “Add Business Term” and it stores a business term as one of those things above (except ‘notes’).
    So that’s ‘thing first’ without a hint of a diagram.

    2. The ORM MetaModel allows for the far-left-squiggle to progress to far-right-squiggle by disambiguating “What are you associated with and why”…when Facts start to emerge as relations between things.
    So that’s Facts taken care of, and as an Enterprise Model moves towards far-right-squiggle.

    3. The ORM MetaModel allows any diagram type that can be drawn on paper to be stored in the ORM MetaModel as a MetaModel. The ORM MetaModel then becoming MetaMetaModel and fulfilling the OMG’s 4-Layer Architecture.

    4. The ORM MetaModel comes with its own Object Constraint Language…dispensing with the need for the OMG’s OCL

    5. ORM, and the ORM MetaModel, supports alethic and deontic constraints…so modal logic is taken care of;

    6. I’ve written an SQL-like language to manipulate Models and MetaModels within the ORM MetaModel, more expressive than, and dispensing with the need for, the OMG’s Query View Transform language;

    7. Clifford’s CQL (Constellation Query Language) is emitted from an ORM MetaModel, fulfilling the desire for a Natural Language based exchange format;

    So, the misunderstanding is that I’m promoting ORM Models/Diagrams…I’m not. What I’m promoting is the ORM MetaModel as the basis for all of the above, and one may use the ORM MetaModel without ever drawing an ORM Model/Diagram or schema. e.g. One could pick up Richmond and only ever use it for UML Diagrams.

    8. More advanced: Because the ORM-MetaModel is like the innards of a Relational Database Management System, one could use it to store any relational data (e.g. an inventory of hardware in the Enterprise, a glossary, a Personnel Directory etc).

    So, I hope that clarifies my position. I’m not promoting ORM Diagrams, I’m promoting the ORM MetaModel as a good way to achieve a basis for an EA toolset (I’ve bet my farm on it with Richmond, so to speak) and for use in delivering an Exchange format (such as the CQL).

    kind regards
    Victor
