Monday, 3 March 2014

Systematic Literature Reviews

Literature reviews are the 'meat and potatoes' of academic research. In some disciplines the process of creating a review has become systematic, for example the Cochrane reviews in medical research. In Software Engineering there has been an attempt to provide such a process, which is described and analysed here. A couple of years ago we (Balbir Barn, Franco Raimondi, Lalith Athappian and myself) were awarded JISC funding to implement a tool to support the construction of an SLR. The first version of our SLR tool has attracted over 100 registered users from academic institutions across the world. The process supported by the tool is based on Cochrane and is described in an upcoming paper at ICEIS 2014. The process is shown below:


Our aim is to create a tool that is 'model driven' in the sense that users can define their own SLR process model and the tool will adapt to it. We have not reached that stage yet, but we would be interested in feedback from anyone who has used the current version of the tool on how it could be improved.

Sunday, 22 December 2013

Computer Science: An Approach to Teaching and Learning

In 2012 we decided to completely review our Computer Science BSc. We are currently delivering the First Year of the new degree. Our First Year aims are to:
  • Support each student to develop an appreciation of the key topics of the discipline through practical problem led holistic sessions that aim to reflect the way that CS occurs in the real world.
  • Integrate programming throughout the year and to use programming as a basis to engage with as many of the foundational aspects as possible.
  • Invert the locus of control for teaching and learning by allowing students to dictate the pace at which learning outcomes are demonstrated.
In order to support the First Year we made two decisions. The first was to use Racket as the programming technology. There were various reasons for this, including the support provided by the DrRacket tool, referential transparency, interaction through the REPL, and the ability of Lisp-based languages to express foundational concepts such as sets, functions and data structures with a minimum of boilerplate.

The second decision relates to course coherence and assessment. Under normal circumstances in the UK, a course is decomposed into a number of modules, each of which has its own syllabus and assessment processes. This imposes fire-walls around the different aspects of the overall year and makes it difficult to let the learning process spread naturally without over-assessment. For example, a First Year will normally contain modules for Architecture, Programming, Fundamentals, etc. Topics such as graphs, events, or binary representation can occur as important features in any or all of these modules. If module apartheid is imposed then modules tend to be overstuffed, or there is a danger that topics are missed. Furthermore, passing the year tends to require a minimum level of success across a range of assessments, usually 40% across the modules (typically with some minimum threshold in each). This raises the question: what does a student who has achieved 40% actually know? 40% of what?

Our plan for the First Year is to introduce a sequence of examples, challenges, mini-projects and case studies that allow multiple topics to be introduced and investigated, and thereby provide the opportunity for students to demonstrate the acquisition of knowledge and skills. Each of the key CS topics might occur multiple times throughout the year. A student is free (within reason) to choose how to do certain challenges and thereby when to demonstrate the learning.

Of course such a free-wheeling approach introduces a significant overhead regarding managing the learning portfolio of each student. This led to the second decision: the CS First Year is supported by a tool that manages Student Observable Behaviours (SOBs). The key topics are decomposed into a collection of SOBs that can be measured by a member of academic staff interacting with a student in a number of ways including labs, group sessions, observing presentations, or one-to-one tutorials. 

SOBs are tagged as: threshold, typical, or excellent. All essential foundational topics in CS are covered by threshold-level SOBs, and to pass the year a student must have demonstrated all threshold SOBs. This ensures that the 40% problem described above does not arise: all students who pass the year have acquired a minimum knowledge of CS across all topics. The typical-level SOBs might introduce more in-depth or specialist knowledge and skills. Excellent-level SOBs provide an opportunity for students to demonstrate advanced standing through activities such as mini-projects.
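To make this concrete, here is a minimal sketch (illustrative only, not the actual SOB tool; the names and structure are invented) of how SOBs and the pass rule might be represented:

    from dataclasses import dataclass, field
    from enum import Enum

    class Level(Enum):
        THRESHOLD = "threshold"
        TYPICAL = "typical"
        EXCELLENT = "excellent"

    @dataclass
    class SOB:
        """A Student Observable Behaviour: a small, directly observable outcome."""
        id: str
        description: str
        level: Level

    @dataclass
    class Student:
        name: str
        demonstrated: set = field(default_factory=set)   # ids of SOBs signed off by staff

    def passes_year(student, sobs):
        """Pass the year only if every threshold-level SOB has been demonstrated."""
        return all(s.id in student.demonstrated
                   for s in sobs if s.level is Level.THRESHOLD)

The point is that passing is a conjunction over all threshold-level SOBs rather than an aggregate percentage, so a pass certifies a known minimum body of knowledge.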

Our SOB tool provides a platform for managing the student portfolios, but also has the benefit of providing a number of reports for both the students and the staff. Students can see their performance to date against what is expected (SOBs have demonstration time-windows that identify where the teaching team have planned opportunities for demonstration, although students are not tied to these) and against the rest of the cohort. Our experience to date is that this real-time reporting provides a beneficial incentive to maximise performance through competition within the cohort.

A screen-shot of the tool showing the current progress of students against threshold-level SOBs is shown below. The black line shows the expected progress. Actual progress is better than expected, although most students are around the 12-18 threshold-SOB mark, which is about right. The fact that quite a few are ahead of the game is consistent with our experience that the approach has improved student engagement over previous versions of the CS First Year.


Here is a screen-shot of the typical and excellent level SOBs:

                                                       
The year is organised as a sequence of three blocks, each of which has a different academic team. Each team organises labs and workshops around the SOB framework and sets a single holistic block-challenge. The first challenge uses an Arduino connected to Racket to build a collection of traffic lights. The second challenge uses data structures in Racket to build a simple dungeon game. The third challenge is to build a robot using Racket and a Raspberry Pi.

The block 2 challenge handout is shown below. It uses a series of simple games implemented in Racket: dungeon_rooms.rkt, moving_about.rkt, monsters_attack.rkt, command_language.rkt, and items.rkt.




Although we are only half-way through the first run of our new design, it seems to be working. Student engagement is very good, with many students determined to 'get ahead of the game' by demonstrating SOBs as early as possible. Many of the excellent SOBs are designed to be very challenging, yet as shown above some students are keen to try them. The use of Racket has gone well, although the small number of Racket textbooks and the dry nature of the on-line documentation are a problem for introductory-level students who want to go further through self-study. In practice, however, this appears to be a very minor issue.

The SOB tool is working better than we could have expected. It has been developed in-house but is not tied to CS or any course in particular. We would be happy to chat with anyone who might be interested in using the SOB tool or in finding out more about how the CS First Year has been designed and delivered.

Thursday, 25 April 2013

Call for Papers: Towards the Model Driven Organization


The First International Workshop
TowArds the Model DrIveN Organization 
29 Sept 2013
As part of the ACM/IEEE 16th International Conference on 
 Model Driven Engineering Languages and Systems (MODELS 2013)
Miami Florida USA 29 September 2013 through 4 October 2013

Overview

Modern organizations are faced with the very challenging problem of rapidly responding to continual external business pressures in order to sustain their competitiveness or to effectively perform mission-critical services. Difficulties arise because the continual evolution of systems and operational procedures that are performed in response to the external pressures eventually leads to suboptimal configurations of the systems and processes that drive the organization.

The management of continuous business change is complicated by the current lack of effective mechanisms for rapidly responding to multiple change drivers. The use of inadequate change management methods and technologies introduces accidental complexities that significantly drive up the cost, risk, and effort of making changes. These problems provide opportunities for developing and applying organization modeling approaches that seek to improve an organization's ability to effectively evolve in response to changes in its business environment. Modeling an organization to better support organizational evolution leads to what we call a Model Driven Organization (MDO), where an MDO is an organization in which models are the primary means for interacting with and evolving the systems that drive an organization.

DEF: A Model Driven Organization uses models in the analysis, design, simulation, delivery, operation, and maintenance of systems to address its strategic, tactical and operational needs and its relation to the wider environment.

An organization's Enterprise Systems (ES) support a wide range of business activities including planning, business intelligence, operationalization, and reporting. ES are thus pivotal to a company's competitiveness. Modelling technologies and approaches that address the development, analysis, deployment and maintenance of ES have started to emerge. Such technologies and approaches must support a much broader collection of use-cases than traditional technologies for systems design modeling. Current ES architectures do not adequately address the growing demands for inter-organisational collaboration, flexibility and advanced decision support in organizations.

Realizing the MDO vision will require research that cross-cuts many areas, including research on enterprise architectures, business process and workflow modeling, system requirements and design modeling, metamodeling, and models@runtime. This workshop seeks to bring together researchers and practitioners from a variety of MDD research domains to discuss the need for, feasibility of, challenges to, and proposed realizations of aspects of the MDO vision.

The full-day workshop aims to provide a forum to report and discuss advances and current research questions in applying modelling technologies to organizations in order to substantially improve their flexibility and economics. The aim is to integrate various areas of research such as models at runtime, (meta-)modelling, modelling tools, enterprise architecture, architecture modelling and business processes. The workshop will include an invited speaker, paper presentations and a discussion on a research roadmap that will contribute to achieving Model Driven Organizations.

Scope

Submissions are solicited in areas that are related to this aim, and that address model-based approaches to the following non-exhaustive list of topics:
  • Frameworks for the Model Driven Organization.
  • Enterprise analysis including risk analysis and resource planning.
  • Stakeholder support through multiple perspectives.
  • Domain specific languages for enterprise modelling.
  • Patterns and best practice for enterprise modelling.
  • Modelling technologies for the Model Driven Organization.
  • Case studies.
  • Maturity models for the Model Driven Organization.
  • Enterprise simulation.
  • Enterprise-wide socio-technical issues.
  • Applying information systems theory to the Model Driven Organization.
  • Enterprise use-cases including:
    • Business change.
    • Regulatory compliance.
    • Mergers and acquisitions.
    • Business goal alignment.
    • Outsourcing.
    • Business intelligence.
Submissions must be in the scope of the workshop as described above. The submission process will be managed by EasyChair: https://www.easychair.org/conferences/?conf=amino2013. All submissions must conform to the LNCS format: http://www.springer.de/comp/lncs/authors.html

Submissions are invited in the following categories:
  • research papers reporting on completed research activities (15 pages + up to 2 pages for references).
  • short papers describing work in progress (8 pages + up to 2 pages for references).
  • position papers describing a new approach to a research question (8 pages + up to 2 pages for references).
  • case-study papers reporting on real-life case studies (8 pages + up to 2 pages for references).

Publication

Publication of the accepted workshop papers will be organised via the MODELS workshop chairs in a formal digital library. In addition, the workshop organisers plan to invite authors of selected papers to submit extended versions to a publication (via collections such as LNCS or LNBIP) of works describing research contributing to the aim of the Model Driven Organization.

Registration

See the MODELS 2013 web site http://modelsconference.org/ for registration.

Committees

Organizing Committee

Program Committee:

Important Dates

  • Workshop Paper Submission Deadline: 15 July 2013
  • Workshop Paper Notification to Authors: August 2013
  • Workshop Dates: 29 Sept 2013

Contacts

Contact the workshop organisers using: amino2013@CS.ColoState.EDU

Wednesday, 30 January 2013

DSL Engineering

Domain Specific Languages and Language Based Software Engineering are important approaches, and I would argue that Systems Engineering is a form of Language Engineering whether developers realise they are doing it or not. Markus Völter has published a new book about DSL Engineering that should be essential reading for anyone who wants a thorough grounding in the subject and some of the supporting technologies.

A Cure for Death by Powerpoint

Franco Raimondi has blogged about The Nomadic Board, an approach to making teaching and learning much more interactive. Linking technologies together as described is a great way to support problem-driven teaching and learning, where lecturer and students work together to develop the slides during the session.

Tuesday, 11 December 2012

Model Driven Organisations

My recent work has been investigating approaches to modelling aspects of an organisation. This was motivated a while back by a presentation on Enterprise Architecture (EA) that I attended, and by frustration with the lack of precision offered by current approaches and the large number of different concepts involved. The work has evolved in two directions.

Firstly, we developed the language LEAP as an executable component-based modelling language based on the hypothesis that many of the features found in current EA modelling languages and analysis processes can be reduced to a small collection of concepts. Current LEAP work aims to integrate intentional aspects of goal-based languages such as KAOS and i* into components.

The second direction addresses the problems that are faced by a modern organisation in terms of its complexity. It is rare for any single individual to have a clear understanding of its information, IT systems, business context and processes. This makes an organisation difficult to manage and maintain. Issues such as regulatory compliance, mergers and acquisitions, outsourcing, etc., can easily get bogged down in detail. Colleagues Balbir Barn, Robert France, Ulrich Frank, Vinay Kulkarni and I have proposed the idea of the Model Driven Organization (MDO) to help address these issues. The idea is that aspects of a business can be modelled and the models can be used to support key EA issues. Models are good at abstracting from implementation detail which makes it easier to perform key analyses and to replace specific implementation platforms. Models can be sliced and presented to different stakeholders in domain-specific ways making it easier for them to understand how an organisation operates without being a technology specialist.

Taking this idea to its limit, all aspects of an organisation could be modelled and the organisation could be run directly from the models; changing the model will directly affect the organisation. What would need to be modelled? The diagram below presents some of the features that a language for MDO would need to offer:


The MDO provides a challenging application domain for model-based engineering research. 

Thursday, 23 February 2012

The Index of the Interesting

Roel Wieringa pointed me at this article, which describes what makes theories interesting. I found the advice on how to structure a research article and the different categories of interestingness fascinating and useful.

Indian Fawlty Towers

I am currently in India at the ISEC 2012 Conference held at the IIT in Kanpur. The IIT is one of the most prestigious in India and has an impressive campus outside the city centre. Kanpur is famous for the massacre of English soldiers and civilians that occurred as part of the Indian Rebellion of 1857. Apart from that, the city seems to be a bit of a dust-bowl.

India is a delightful place to visit and I would recommend the ISEC conference, which has a great mix of delegates from Indian universities and IT companies. However, one has to be careful about hotels. The very excellent Taj chain provides a level of accommodation and service that exceeds more expensive hotels in the UK. Unfortunately, Kanpur did not have a Taj hotel, and we decided to pass on the opportunity to stay at the IIT, put off by the name 'Visitors Hostel' (which turned out to be a delightful oasis of calm), opting instead for what we thought would be a Taj equivalent. Amazingly, we appear to have found the equivalent of Fawlty Towers right here in India. The glory days of the rooms and facilities were decades ago and the staff seem to have been trained by Basil himself.

Conference presentation is below.


Thursday, 3 November 2011

Programming: Sheep and Goats

Prof Richard Bornat and Saeed Dehnad gave a fascinating EIS seminar yesterday on New Insights into Learning and Teaching Programming. Here is the abstract:
Historically a high proportion (round 30%, sometimes up to 50%) of novice programmers fails to learn to program, and the level of achievement amongst the successful is often disappointing. Until recently the reasons for this miserable state of affairs were mysterious. Now we have some insight, we have a test which reveals important differences between novices, and two potential explanations. The gloomy explanation is that there is a 'geek gene'; even if that is true, we may be able to identify the geeks. The hopeful explanation is that there is something peculiar about programming courses, which elevates difficulties into show-stopping obstacles; if true, we have at last identified one such obstacle, we can hope to identify others, we can diagnose those who are stuck and we may at last be able to do something about it.
They have devised some tests to be given to first-year students before and during a programming course. The tests ask students to interpret programs and visual representations of system states in a way that identifies those who can induce the rules of a machine. The test distinguishes those who cannot work out the rules, those who work out a consistent set of rules that happen to be wrong, and those who get the correct set of rules.

I find this very persuasive, since I noticed a change in my own programming abilities after being introduced to the SECD machine over 20 years ago. The SECD machine is for a particular language, but I hold to the principle that expert programmers have a facility for working with algebraic representations of system executions at a variety of abstraction levels. Essentially, such machines support a data representation of current states, and of both past and future, actual and possible, system executions. That is to be contrasted with axiomatic and denotational semantics, which seem less oriented to humans than to mathematics, and with other forms of operational semantics, such as SOS or program interpreters, where complete executions are not conveniently modularized.
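To illustrate what a data representation of executions buys you, here is a toy sketch (far simpler than the real SECD machine, and purely illustrative) of an evaluator written as a step function over explicit machine states:

    # A toy machine in the spirit of SECD: the whole execution state is ordinary
    # data (a value stack and a control list), so executions can be stored,
    # inspected and compared step by step.

    def step(state):
        stack, control = state
        op, rest = control[0], control[1:]
        if isinstance(op, int):                    # literal: push it
            return (stack + [op], rest)
        if op == "+":                              # instruction: pop two, push the sum
            return (stack[:-2] + [stack[-2] + stack[-1]], rest)
        if op == "*":
            return (stack[:-2] + [stack[-2] * stack[-1]], rest)
        raise ValueError("unknown instruction: " + str(op))

    def run(control):
        state = ([], list(control))
        trace = [state]
        while state[1]:                            # while there is control left
            state = step(state)
            trace.append(state)
        return state[0][-1], trace                 # final value plus every intermediate state

    # (1 + 2) * 4 in postfix; the trace makes the whole execution a first-class value.
    value, trace = run([1, 2, "+", 4, "*"])

Because every state and every trace is ordinary data, executions can be inspected, replayed and compared, which is exactly the kind of manipulation that axiomatic or denotational presentations make harder for a working programmer.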

The results of Bornat and Dehnad are also interesting because they can explain the often-observed division of learners into sheep (those who can program) and goats (those who cannot). The claim is that introductory programming courses build topic on topic, and that any programme of learning with this feature will exhibit polarity in student results. Although no solution was offered, it was observed that organizing programming courses for mixed ability might avoid losing students at the initial hurdles.
 

Sunday, 9 October 2011

Presentation on Model Driven CARA

Last week I presented a School Research Seminar on Model Driven Context Aware Reactive Applications. The seminar generated some interesting questions, particularly about proving security and privacy aspects of applications given a precise representation such as that supported by the approach described in the slides below. This is something I had not really thought about, but is an important feature of mobile applications. Another important area is the incorporation of product lines into this approach. Static variability occurs because of the large number of (versions of) platforms and dynamic variability occurs due to changes in context (battery levels and location for example).

Saturday, 1 October 2011

Attended a workshop yesterday on Composition and Evolution of Model Transformations at King's College, London organized by Kevin Lano and Steffen Zschaler. The presentations covered a wide range of topics including: composition of bidirectional model transformations; traceability issues in the composition of input-destructive model transformations; transformation reuse; composing ATL transformations; verification logics for transformations; visualization of transformation traces; the use of Java agents for model transformations;  specifying transformations for model slicing; composition of UML-RSDS transformations. My presentation (and associated paper) was on model slicing:

Sunday, 18 September 2011

Model Driven Context Aware Reactive Applications

The explosion of smartphones and tablets has created a demand for fairly simple applications that are essentially driven by user events and that have some element of context-awareness. Unfortunately, the complexity of the technology used to develop such applications is not commensurate with the complexity of the resulting app. Therefore, this is an area that is ripe for Model Driven Engineering. There are currently a number of ongoing attempts at developing modelling languages and code generation in this domain; however, many are complex, platform specific and/or incomplete. I am currently working on an approach that uses UML-style class and state diagrams to capture the structure and big-picture behaviour of such applications, and uses a typed functional calculus equipped with essential context-aware reactive features to fill in the detail. The rest of this post describes the models; I'll post the calculus next time.

The diagram on the left shows Tony's mobile phone.
Clicking on 'add' allows the current address book to be viewed and edited, as shown on the right.
The owner is alerted when a phone that occurs in the address book comes into range.

There are several general features that occur in the application: hierarchical GUI; user-events; context-events; state-changes. These can be captured on stereotyped UML class diagrams and state machines.

The model on the left shows the main screen. There is a single root class called Main. Classes labelled external are part of a user-defined library that must conform to a given interface and that raise events. Classes labelled widget are user defined and can handle events (via handler), raise events (via event) and perform commands (via command). Events are processed according to a containment hierarchy. The main screen is shown in the screen-shot above.
The model on the right shows the screen that is created when the 'add' button is pressed. The add screen is shown in the screen-shot above. It uses external widgets to handle the display and editing of lists of contacts. Notice the link back to the main screen that can be used when the back button is pressed. The delete screen is the same as the add screen.
The model on the left shows the screen that is created when the owner is notified of a contact that has come into range.
The behaviour is specified using a state machine where the states correspond to the root classes of the other models. The actions on the transitions are commands and transitions are fired in response to events (user or context).
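As a rough sketch of the event processing described above (hypothetical code, not the calculus itself; the widget names are invented), user and context events can be thought of as flowing through the containment hierarchy of widgets:

    # Hypothetical sketch: widgets form a containment hierarchy; user events and
    # context events are routed through that hierarchy to whichever handlers exist.

    class Widget:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.handlers = {}                     # event name -> callable

        def on(self, event, handler):
            self.handlers[event] = handler
            return self

        def dispatch(self, event, payload=None):
            if event in self.handlers:             # handle locally if possible...
                self.handlers[event](payload)
            for child in self.children:            # ...then offer to contained widgets
                child.dispatch(event, payload)

    # The main screen contains an address-book widget; the 'add' user event and
    # the 'in-range' context event are both just events flowing through the tree.
    address_book = Widget("AddressBook").on(
        "in-range", lambda contact: print("notify owner:", contact, "is nearby"))
    main = Widget("Main", [address_book]).on(
        "add", lambda _: print("open the add screen"))

    main.dispatch("add")
    main.dispatch("in-range", "Tony")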








Tuesday, 14 June 2011

SoSym Theme Issue: Enterprise Modelling


Modern organizations rely on complex configurations of distributed IT systems that implement key business processes and provide databases, data warehousing, and business intelligence. The current business environment requires organizations to comply with a range of externally defined regulations such as Sarbanes-Oxley and Basel II.
Organizations need to be increasingly agile, robust, and be able to react to complex events, possibly in terms of dynamic reconfiguration.
In order to satisfy these complex requirements, large organizations are increasingly using Enterprise Modelling (EM) technologies to analyze their business units, processes, resources and IT systems, and to show how these elements satisfy the goals of the business. EM describes all aspects of the construction and analysis of organizational models and supports enterprise use cases including:
  • Business Alignment: elements of a business are shown to meet its goals.
  • Business Change Management: as-is and to-be models are used to plan how a business is to be changed.
  • Governance and Compliance: models are used to show that processes are in place to comply with regulations.
  • Acquisitions and Mergers: models are used to analyze the effect of combining two or more businesses.
  • Enterprise Resource Planning: models are used to analyze the use of resources within a business and to show that given quality criteria are achieved.
Emerging technologies, methods and techniques currently proposed for EM include:
  • Modelling Languages: including UML; SysML; ArchiMate; MODAF; TOGAF.
  • Enterprise Views: stakeholder identification; multiple linguistic communities.
  • Enterprise Patterns: an organization is shown to conform to general (possibly executable) organizational principles.
  • Event Driven Architectures: constructing enterprise architectures based on complex events.
  • Enterprise Simulation: executing configurations of organizational units in order to analyse and verify performance.
The Journal of Software and Systems Modeling (SoSyM) invites original, high-quality submissions for its theme issue on Enterprise Modelling (EM). The aim of the theme issue is to bring together a collection of articles that describe a range of EM technologies and approaches in order to provide the reader with a single resource that captures the state of the art. The theme issue will include an introduction to the field, an overview of the leading-edge languages and technologies used to undertake EM, and in-depth analyses of techniques or approaches for specific use-cases of EM.

Papers must be written in a scientifically rigorous manner with adequate references to related work.

Submitted papers must not be simultaneously submitted in an extended form or in a shortened form to other journals or conferences. It is however possible to submit extended versions of previously published work if less than 75% of the content already appeared in a non-journal publication, or less than 40% in a journal publication. Please see the SoSyM Policy Statement on Plagiarism for further conditions.

Submitted papers do not need to adhere to a particular format or page limit, but should be prepared using font “Times New Roman” with a font size no smaller than 11 pt, and with 1.5 line spacing. Please consult the SoSyM author information for submitting papers.

Each paper will be reviewed by at least three reviewers. 

Important Dates: 
  • Intent to submit : 01 Sep  11
  • Paper submission:  01 Nov 11
  • Notification: 01 Feb  12
  • Publication: 2012 
 Follow this link for more information.

    Saturday, 11 June 2011

    Magic Gopher

    My father pointed me at this British Council site (shown on the right). The games are used for learning English; however, my children are amazed that the Gopher can read their minds and gets the answer right every time.

    Monday, 6 June 2011

    XMF Source Code

    The source code of XMF is now available for browsing via my home page. Look under XMF -> Source Code where there are two browser trees labelled com.ceteva.xmf.machine and com.ceteva.xmf.system. The first contains the source code of the XMF VM (in Java) and the second contains the source code of XMF (in XMF under xmf-src). Note that the Java source viewer will only work when navigating from a file within the source tree due to browser security issues.

    The browser trees were created by running the following XMF code over a root directory:
    context Root
     @Operation list(out,root,prefix,indent)
      let path = root + "/" + prefix then
          dir = Directory(path,Seq{".*xmf",".*java",".*txt"})
      in dir.build(1);
         format(out,"~V<li><a href='#'>~S</a>~%",Seq{indent,dir.name.splitBy("/",0,0)->last});
         format(out,"~V<ul>~%",Seq{indent+2});
         @For x in dir.contents() do
          let name = x.name.splitBy("/",0,0)->last
          in if not Set{".svn","META-INF"}->includes(name)
             then
              if x.isKindOf(Directory)
              then list(out,root,prefix + "/" + name ,indent+2)
              else 
               if name.hasSuffix("java")
               then format(out,"~V<li class='item'><a onclick='display_java('~S');'>~S</a></li>~%",Seq{indent+2,prefix+"/"+name,name})
               else
                format(out,"~V<li class='item'><a href='~S' target='FILES'>~S</a></li>~%",Seq{indent+2,prefix+"/"+name,name})
               end
              end
             end
         end
        end;
        format(out,"~V</ul>~%~V</li>~%",Seq{indent+2,indent})    
      end
     end
    

    Wednesday, 25 May 2011

    XMF and XModeler

    Both XMF and XModeler are now available from my home page. Click on the links to the left to get instructions for download and for documentation. XMF is a language for developing Domain Specific Languages and for Language Oriented Programming. XModeler is an IDE for Model Driven Engineering and for developing XMF programs.

    The download of XMF includes the source code. Since XMF is written in itself (on a small VM written in Java), this is an excellent place to start to see what you can do with the language. XMF supports both functional and object-oriented programming. Classes in XMF have optional grammars that can be used to create syntax-classes that extend the base language. XMF includes features for pattern matching, processing XML, writing prolog-style rules over object structures, threads, daemons, quasi-quotes for processing syntax. Virtually all aspects of the language are open for extension and reflection.

    The download of XModeler includes the source code (the actual sources will follow later) so you can browse through the implementation using the various editors. XModeler is written in XMF.

    Tuesday, 10 May 2011

    XPL: A Functional Language for Language Oriented Programming

    XPL is a functional language that has been developed to experiment with Domain Specific Languages and Language Oriented Programming. It is written in Java. The source code for XPL v 0.1 can be downloaded here. The language has first-class grammars that can be combined, and it has access to its own abstract syntax. Grammars use quasi-quotes to build new syntax structures that can be inserted into the XPL execution stream. This is like macros in Scheme, in that language features can be defined with a limited scope. However, unlike Scheme, XPL can define the syntax of each new language feature.

    Here is a simple language feature inspired by an example from Martin Fowler. Suppose that we get a stream of character codes as input. The stream contains information about customers and we need to chop up the input to produce data records. If there are a large number of different types of input data and they change regularly then it makes sense to define a declarative language construct that defines each type. The following XPL code defines a language construct for recovering structure from an input stream:
    export test1,test2
    
    // We need syntax constructors for Record and Field:
    
    import 'src/xpl/exp.xpl'
    
    // We need list operations:
    //   take([1,2,3],2) = [1,2]
    //   drop([1,2,3],2) = [3]
    //   foldr(f,g,b,[1,2,3]) = g(f(1),g(f(2),g(f(3),b)))
    
    import 'src/xpl/lists.xpl'
    
    // Define some functions to be used as args to foldr:
    
    combine(left,right) = [| fun(l) ${left}(l,fun(r,l) r + ${right}(l)) |]
       id(x) = x
       empty = [| fun(l) {} |]
    
    // Define a function that constructs *the syntax* of a field extractor:
    //   extractor('name',5) = [| fun(l,k) k({name=asString(take(l,5))},drop(l,5)) |]
    
    extractor(n,i) = 
      let record = Record([Field(n,[| asString(take(l,${i})) |])]) 
      in [| fun(l,k) k(${record},drop(l,${i})) |]
    
    // Define the grammar to consists of a sequence of fields.
    // Each field builds an extractor. All extractors are combined
    // into a mapping from a sequence of character codes to a record:
    
    grammar = {
      fields -> fs=field* { foldr(id,combine,empty,fs) };
      field -> n=name whitespace ':' i=int { extractor(n,i) };
      int -> whitespace n=numeric+ { Int(asInt(n)) };
      whitespace -> (32 | 10 | 9 | 13)*;  
      name -> whitespace l=alpha ls=alpha* { asString(l:ls) }; 
      alpha -> ['a','z'];
      numeric -> ['0','9']
    }
    
    // Here is a use of the language: a customer is a name (5 chars) followed
    // by an address (15 chars), followed by an account number (3 chars):
    
    customer = 
      intern grammar {
        customer:5
        address:15
        account:3
      }
    
    // The customer map is used by applying it to a stream of char codes:
    
    input = [102,114,101,100,32,49,48,32,77,97,105,110,32,82,111,97,100,32,32,32,53,48,49]
     
    test1() = customer(input)
    
    // Just to show everything is first-class:
    
    test2() = map(customer,repeat(input,10))
    
    To use XPL you download the ZIP file (link above). It is developed as an Eclipse project, but can also be run stand-alone. The interpreter is in the xpl package in the source folder. If you run the Interpreter as a Java application in Eclipse then the console becomes an XPL top-level loop that you can type XPL commands to. Here is a transcript of the example given above (user input after a '>' followed by XPL output):
    [src/xpl/xpl.xpl 2353 ms,136]
    > import 'src/xpl/split.xpl';
    [src/xpl/split.xpl 179 ms,262]
    [src/xpl/exp.xpl 26 ms,404]
    [src/xpl/lists.xpl 108 ms,164]
    > test1();
    {customer=fred ;address=10 Main Road   ;account=501}
    > test2();
    [{customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501},
     {customer=fred ;address=10 Main Road   ;account=501}]
    > 
    There is currently no user documentation (contact me if you are interested in this). But there are some technical articles: language modules in XPL, modular interpreters in XPL, and parsing infix operators using XPL. In addition the source code contains a number of examples in the xpl folder.

    Thursday, 17 March 2011

    Class Modelling

    System modelling is something that students often find difficult to do. One possible reason for this is that they have difficulty mapping the models to a framework that can be used to validate their design, i.e. what does the model do? Whilst modelling should be more abstract than programming in pictures, I think that grounding the models in an implementation language, at least initially, is a good place to start.

    To that end, a while ago I produced and delivered some material that linked class modelling with Java implementations. This was quite well received and can be downloaded as a zip file. The application is a small hotel booking system whose model is shown on the right. The material includes the Java code (as an Eclipse project) of a basic booking system, some slides, some student activities to extend the models and implementation, and the solutions. The models contained in the material were created using the open-source modelling tool called StarUML.

    A nice feature of the implementation is that it can print out the state of the booking system as an XML document. This means that students can understand state changes in terms of pre- and post-states of the system. A natural extension of the material would be to introduce pre- and post-conditions that can be articulated in terms of the XML system states.
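    As a sketch of the idea (in Python for brevity; the actual teaching material is in Java, and the XML shape shown here is invented), a post-condition for a booking operation could be checked directly against the XML dumps of the pre- and post-states:

        # Illustrative only: the real material is in Java and its XML layout may differ.
        import xml.etree.ElementTree as ET

        pre_state = ET.fromstring(
            "<hotel><room number='101' booked='false'/></hotel>")
        post_state = ET.fromstring(
            "<hotel><room number='101' booked='true' guest='Smith'/></hotel>")

        def booked_rooms(state):
            return {r.get("number") for r in state.findall("room")
                    if r.get("booked") == "true"}

        # Post-condition for 'book room 101': the room is booked afterwards and
        # no previously booked room has been lost.
        assert "101" in booked_rooms(post_state)
        assert booked_rooms(pre_state) <= booked_rooms(post_state)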

    Monday, 14 March 2011

    Arrived back from a three-week tour of India. We visited many companies and HE institutions to discuss collaboration in Thiruvananthapuram (Trivandrum), Bangalore, Chennai, Mumbai, Pune, and New Delhi. The scale of operations of the IT companies is huge. One commercial campus we visited has around 25,000 employees; others are even larger.

    Friday, 25 February 2011

    ISEC 2011

    At the ISEC 2011 Conference this week in Thiruvananthapuram. The conference was held at the TCS offices and the entrance lobby was decorated with flower petals, as shown on the right. Someone posted photos (of a session I was not at).


    There was a lot of interest in development methods at the conference, particularly Agile methods and Scrum. I gave an invited talk on Domain Specific Modelling as Theory Building (see entry below) at the Advances in Model-based Software Engineering Workshop and presented a paper on Model Based Enterprise Architecture in the main conference.

    Wednesday, 8 December 2010

    Logic

    The BBC Radio 4 weekly discussion programme In Our Time recently covered the history of Logic from Socrates through to computation with Alan Turing. It is an interesting programme and well worth a listen: necessarily broad in scope, but more accessible than the recent hilarious attempt by the same programme to deal with imaginary numbers, during which the presenter, Melvyn Bragg, tied himself in knots: 'yes... but what are they?'

    Tuesday, 2 November 2010

    Don't Monkey Around with your Mobile

    Dean Kramer tweeted the following method as part of the Android library:

    Modelling as Theory Building

    Balbir Barn pointed me to an old paper by Peter Naur: Programming as Theory Building, Microprocessing and Microprogramming, 15(5):253-261, 1985. Naur proposes that program designers should explicitly build theories of an application that address all features of the domain that are related to the desired behaviour, whether or not those features have any direct counterpart in the eventual implementation. The theory is implemented by mapping it to a target platform. In doing so, the theory faithfully represents what would now be called the problem domain and is the key development artifact that can be used to understand and maintain the application. This paper is very relevant to the activities of the DSM and DSL communities. In particular, DSM development (for example profiles in UML) rarely pays attention to the semantics (the theory) of the language being defined.

    Sunday, 10 October 2010

    OCL and Textual Modelling Workshop

    The OCL and Textual Modelling Workshop at MODELS 2010 was well attended and we had excellent presentations. The day concluded with a review of the current state of the OCL standard maintained by the OMG and a discussion about features that OCL could include in the future. Jordi has blogged an overview of the presentations.

    Saturday, 18 September 2010

    Tom Gilb Keynote

    I attended the keynote presentation by Tom Gilb at the International Workshop on Requirements Analysis (IWRA 2010) at Middlesex University. Tom's keynote, entitled What's Wrong with Requirements Methods, included many examples of software projects with scarily large budgets where the requirements were vague and focused on software form and function rather than on concrete, measurable business drivers. He offered a definition of Requirement as Stakeholder Valued End-State, together with an associated decomposition into a hierarchy of requirement types that is intended to be more useful than the ubiquitous functional and non-functional categories of requirements. Tom's recent work includes the requirements engineering language Planguage and the associated book Competitive Engineering.

    Sunday, 12 September 2010

    Exam Howlers

    This week the Times Higher Education magazine published some US responses to UK Exam Howlers. A personal favourite from the UK:
    [A tutor] was asked for a reference via the following message: "Will you please be a referee for a job for which I am appalling?"
    And from the US:
    [A] lecturer cited a student's scathing observation that "Prof seems to think that he knows more than the students."
    Reminds me of an eminent UK Professor who recalled a student's query during a lecture:
    "Sir, are you making all this stuff up?"

    Saturday, 4 September 2010

    IEEE Software Special Issue: Multiparadigm Programming

    Dean Wampler and I have guest edited the recent issue of IEEE Software on Multiparadigm Programming. From the guest editors' introduction:

    Programming languages, frameworks, and platforms require the developer to use a collection of provided programming features—abstractions—to express data, implement desired calculations, interact with other technologies, create user interfaces, and so on. A collection of coherent, often ideologically or theoretically based abstractions constitutes a programming paradigm. Often, a given programming technology is based on one particular paradigm.
    Well-known examples include object-oriented, relational, functional, constraint-based, theorem-proving, concurrent, imperative, and declarative. Less well-known (or perhaps less well-defined) examples include graphical, reflective, context-aware, rule-based, and agent-oriented.
    A particular paradigm leads to a specific type of implementation style and is best suited to certain types of applications. Relational programming benefits information-rich applications, whereas imperative programming is commonly used to control hardware. But today's applications are seldom homogeneous. They are frequently complex systems, made up of many subcomponents that require a mixture of technologies. Thus, using just one language technology and paradigm is becoming much less common, replaced by multiparadigm programming in which the heterogeneous application consists of several subcomponents, each implemented with an appropriate paradigm and able to communicate with other subcomponents implemented with a different paradigm. When more than one language is used, we call this polyglot ("many tongues") programming.
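    As a small illustration of the idea (my own example, not one from the issue), an imperative, object-oriented host language can embed a declarative, relational sub-language within a single program:

        # My own illustration: imperative, object-oriented Python hosting a
        # declarative, relational sub-language (SQL): two paradigms, one program.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
        db.executemany("INSERT INTO orders VALUES (?, ?)",
                       [("acme", 120.0), ("acme", 80.0), ("globex", 40.0)])

        # The 'what' is stated relationally; the 'how' is left to the SQL engine.
        for customer, total in db.execute(
                "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
            print(customer, total)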

    Tuesday, 10 August 2010

    Defunctionalization

    Olivier Danvy's paper describing a rational deconstruction of the SECD machine is very interesting. It observes that if a machine uses lambda-abstractions to delay computations then this is equivalent, through a process called defunctionalization, to the use of stateful machine instructions and a separate auxiliary top-level interpreter. He uses this process to 'discover' the machine instructions of SECD. Effectively, the free variables in the closures that are created in order to delay the computations are captured in data structures (the stateful instructions) that are subsequently fed to top-level functions. Defunctionalization was discovered by John Reynolds and, in a way, shows that higher-order functions are not fundamental to computation (a view advocated by Joseph Goguen). However, writing programs in a defunctionalized style would be a real pain because all lambdas would have to be lifted to the top level. So perhaps higher-order functions are fundamental to programming after all.
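    As a tiny illustration (in Python rather than the functional setting of the original work), here is a higher-order function alongside a defunctionalized version in which each possible lambda becomes a data value interpreted by a single top-level apply function:

        # Higher-order version: the predicate is a first-class function.
        def filter_hof(pred, xs):
            return [x for x in xs if pred(x)]

        big = filter_hof(lambda x: x > 10, [3, 12, 7, 40])

        # Defunctionalized version: each lambda that could occur is replaced by a
        # data value recording its free variables, plus one top-level 'apply'.
        def greater_than(n):                 # stands for: lambda x: x > n
            return ("GREATER_THAN", n)

        def is_even():                       # stands for: lambda x: x % 2 == 0
            return ("IS_EVEN",)

        def apply_fn(closure, x):
            tag = closure[0]
            if tag == "GREATER_THAN":
                return x > closure[1]
            if tag == "IS_EVEN":
                return x % 2 == 0
            raise ValueError(tag)

        def filter_defun(closure, xs):
            return [x for x in xs if apply_fn(closure, x)]

        big_again = filter_defun(greater_than(10), [3, 12, 7, 40])

    The second version is first-order: the 'closures' are just tagged tuples recording their free variables, which is exactly the kind of data structure that shows up as stateful machine instructions in the deconstruction.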

    Thursday, 29 July 2010

    Lots of Tiny Languages - All Different

    The MIT Technology Review reports on the Emerging Languages Camp. This looks like an indication of things to come. The article makes an interesting point that existing mainstream languages were designed for computational architectures that are rapidly becoming outdated. Software systems are no longer based on single-processor, single-heap, reliable in-core execution paradigms. There seems to be increasing interest in languages that are not exclusively based around the all-conquering OO message-and-state mechanisms that emerged in the 1980s. Part of the change seems to be driven by the rise of mobile devices and new styles of device interface. Is the era of the 'big language' (C++, Java) over?