IMS Curriculum Standards Workshop
Originally posted on Half an Hour, July 17, 2009.
More blog summaries from the IMS 2009 meetings in Montreal
Overview of Common Cartridge
(same talk as the previous two days)
Achievement Standards Network
Diny Golder (presenting)
An achievement standard "is an assertion by someone in authority..."
The domains we want to cross are geographic domains, grade-level domains, and work domains. So when we use the term it could be curriculum standards, content standards, etc. ASN is a data model for these assertions.
Specifically, the ASN grew out of a lot of research dealing with the domain of cyber-learning, with the ways one learns in a digital world. The goals look forward: not just to the existing needs of the standards bodies, but also to enable a global distribution model, so anyone can play in this field. This is very similar to the way we live in the world of paper libraries, and what a catalogue record is about. ASN standard representations are licensed under Creative Commons.
Global interoperability: standards data can be referenced with URIs.
Australia: resource integration through achievement standards. Being used for Australia and also countries they work in. In Australia, they are developing a curriculum, but also a national standards system. Parts include personal learning, reporting, student portfolios, etc.
The idea of sharing and collecting resources to teach is not a new idea, but it's fairly new in the environment of traditional textbook publishers. Example: Thinkfinity, from National Geographic, Smithsonian, etc. They have correlated resources using ASN, and Thinkfinity pulls them together. Similarly, TeachEngineering correlated resources to standards for all 50 states. Also, WGBH does education programming for PBS, and these are correlated to standards for the 50 states.
The Michigan eLibrary page is an example of different views of the standards; this (demo) is their cataloguing tool. They get resources from hundreds of sources, and they do not include correlation natively, but a cataloguer correlates it. The cataloguer correlates to a level. The user can browse via the standards and choose a subject, and they have an indicator that a resource has been correlated to that 'statement' at that level (a 'statement' is a competency assertion, generally found within a state standard).
(Yvonne from D2L presenting)
At D2L we have integrated this into the system. Basically we have taken the ASN competencies list, converted it, and integrated it into the tool. Hence, we have learning activities tied to the curriculum standards. So teachers and designers can tie their materials to competencies in the ASN.
In our learning repository tool, we embed the taxonomy information into the metadata. The publishing of the material will allow the material to be aligned to the standards. You can browse through the repository by competency and grab content specifically correlated to that learning objective. Or if you look at content, the system will return any competency associated with that content.
The ePortfolios tool - it's really important to be able to look at the different levels of competence and how students will achieve those. You can take those competences and publish them to the ePortfolio. You can then look at that and see which of the competencies you've accomplished. And maybe add more during the course of the year.
Competencies can be added to eportfolio presentations, tagged, added to collections, shared, reflected upon, etc. (demo - competency reflection).
Click & Learn - another example. They feel one of the things they offer to subscribers is an easy way to browse the collection by the standards. They have web services that go out and pull the data live from our databases. They are indifferent as to whether the definition of the standard has changed. None of this is stored locally. (Standards change only in one way: they get new ones. Old data never goes away. Resources remain related to statements.)
Looking at ISO-MLR compared to ASN. We use Dublin Core for the well-defined abstract model. We use globally unique identifiers that are dereferenceable over the web: URIs. Every node in the standards document has its own URI. When it's resolved, what is returned is the taxon path all the way up to the standards document.
When we talk of standards, we normally think of text blocks. But behind that, every statement has a whole set of metadata that systems find very useful. That schema is Dublin Core, and it is extensible (we found Australia needed to do this). So there is the Australian application profile of the ASN, the US application profile, and we expect many more.
The ASN model basically starts with an achievement document representation. We do not duplicate the standard, we create a representation of the standard. So we create an achievement document representation that in RDF consists of a set of nodes and arcs. There are two primary entities: the standards document itself, and the nodes, which are individual assertions (statements), plus arcs that define relations across the nodes (nothing says it has to be hierarchies) - this allows new relations to be established between the nodes as they emerge in practice.
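The nodes-and-arcs model just described can be sketched in miniature. The URIs, property names, and data shapes below are illustrative assumptions only, not actual ASN identifiers or schema:

```python
# A minimal sketch of the ASN nodes-and-arcs model using plain Python
# structures. All URIs and relation names here are hypothetical.

# The standards document itself is one entity.
document = {
    "uri": "http://example.org/asn/D100001",   # hypothetical document URI
    "title": "Example State Science Standards",
    "jurisdiction": "Example State",
}

# Each statement (assertion) is a node with its own URI and its own
# descriptive metadata, separate from its text.
nodes = {
    "http://example.org/asn/S100002": {
        "text": "Describe the water cycle.",
        "educationLevel": "Grade 4",
    },
    "http://example.org/asn/S100003": {
        "text": "Identify evaporation, condensation, and precipitation.",
        "educationLevel": "Grade 4",
    },
}

# Arcs relate nodes to each other and to the document. Nothing requires
# a strict hierarchy; new relation types can be added as they emerge.
arcs = [
    ("http://example.org/asn/S100002", "isPartOf", document["uri"]),
    ("http://example.org/asn/S100003", "isChildOf", "http://example.org/asn/S100002"),
]

def taxon_path(node_uri):
    """Walk upward arcs from a node to the standards document, returning
    the chain of URIs - roughly what dereferencing a node URI returns."""
    path = [node_uri]
    current = node_uri
    found = True
    while found:
        found = False
        for subj, rel, obj in arcs:
            if subj == current and rel in ("isChildOf", "isPartOf"):
                path.append(obj)
                current = obj
                found = True
                break
    return path

print(taxon_path("http://example.org/asn/S100003"))
```

In real ASN data these would be RDF triples, but the idea is the same: every node is independently addressable, and the relations are data, not document structure.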
Text needs to be there to satisfy humans, but text is inherently ambiguous. We always identify text with URIs, because even when text strings match, the statements may be in different hierarchies and have different meanings, so they have to have their own URIs. Behind the text is a rich set of descriptive metadata, of which the text is one piece. All the rest of these properties (about 57 elements) apply to it. Eg. education level (in Bloom's taxonomy), jurisdiction (where it's from), etc.
This metadata is extremely useful for a publisher who is creating a correlating point for a resource. When the standards statement has rich metadata, these elements help you make correlation points to resources that also have rich metadata. If a resource doesn't fit this sort of correlation, you can exclude it from consideration. But if you have, say, a history resource, and you are on a timeline, you can use the spatial and temporal aspects to apply it to the timeline.
The metadata is created by the people who are creating the descriptions of the standards. It could be us, or it could be the standard authors - the Australians are doing it themselves.
(Comment: if you have tools that depend on there being alignments, they have to be there - you can't depend on them and then have them not filled out)
Reply: we follow the general DC principle of 'optional'. It's up to application profiles to determine within the domain. For example, the Australians depend on what is required or not.
The list has come out of a lot of organizations producing 'national exemplary statements'.
When a taxon path is returned, all the metadata comes with it, not just the text.
3rd Party Derived Statements
Our goal in the last five years has been to support as many environments as we can. 3rd party derived statements tend to arise where the statement in the standard is not granular enough and the publisher needs it to be more granular; such 3rd parties can create derived statements and lock them into the ASN model. That doesn't mean that's the wisest thing to do, just that it can be done. Just consider this a framing of the issue - that we can support third party assertions.
For example, here you see a very simple taxon path (image). It's a simple hierarchy; each level is an entity, and behind each entity is a metadata description. This is from Ohio; it's fairly shallow, only three levels deep, and if you get down to this node (bottom) you will see that there are some 60 competencies here, and you may want to test to only part of it. You may say, 'my testing corresponds to this subset', and this subset is a derived statement.
Derived statements are 3rd party statements that *refine* the original statement. They restrict it in some way. As long as it's a legitimate refinement, the datamodel will handle it. So the derived statement will reference back to the canonical node it was derived from.
Example: derived statement with a URI pointing to its own domain, say, test.com. When you hit one of these, you have these options:
- you can discard it ("I don't speak test.com" - not recommended)
- you can use the correlation from a trusted source
- you can generalize the correlation (to "dumb it down"), which is the same as discarding the correlation but applying it higher in the taxon path (this is what you would do if you don't trust the source)
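These three options can be sketched as a fallback routine. The function name, the statement shape, and the domain-based trust check below are assumptions for illustration, not part of the ASN specification:

```python
# A sketch of the options for handling a correlation that points at a
# (possibly third-party derived) statement. Structures are hypothetical.

def resolve_correlation(statement, trusted_sources):
    """Return the statement URI to use for a correlation.

    `statement` is a dict with keys:
      - "uri": the statement's URI (its domain identifies the asserting party)
      - "derived_from": URI of the canonical node it refines, or None
    `trusted_sources` is a set of domains whose derived statements we accept.
    """
    derived_from = statement.get("derived_from")
    if derived_from is None:
        return statement["uri"]      # a canonical statement: use as-is

    domain = statement["uri"].split("/")[2]
    if domain in trusted_sources:
        return statement["uri"]      # option 2: use it, trusting the source
    # Option 3: generalize the correlation by applying it higher in the
    # taxon path, i.e. to the canonical node the derived statement refines.
    # (Option 1, discarding outright, would be `return None` instead.)
    return derived_from

derived = {
    "uri": "http://test.com/statements/42",
    "derived_from": "http://example.org/asn/S100002",
}
print(resolve_correlation(derived, trusted_sources=set()))
# falls back to the canonical node when test.com is not trusted
```

The key property, per the talk, is that the derived statement always carries a reference back to the canonical node, so generalizing is always possible.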
The point is that the model supports 3rd party assertions, and they can be locked into the canonical structure, and they have meaning and context. What you do with them, though, is completely up to you.
These are useful when you have a statement that is so complicated no resource in the world could do all these things, so you infer from it ten different derived statements.
We have distributed a couple of research papers, one of which addresses the question of 'strength of fit', eg., "the statement is broader than the resource", "the resource is broader than the statement", so we're exploring this.
So you can have a 'strength of fit' threshold.
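A threshold like that could be applied as a simple filter. The numeric scoring scale below is an assumed illustration; the research papers mentioned don't (as reported here) fix a particular scale:

```python
# A sketch of filtering correlated resources by an assumed numeric
# 'strength of fit' score in [0, 1].

correlations = [
    {"resource": "lesson-a", "fit": 0.9},  # statement closely matches resource
    {"resource": "lesson-b", "fit": 0.4},  # statement much broader than resource
]

def above_threshold(correlations, threshold):
    """Keep only resources whose strength of fit meets the threshold."""
    return [c["resource"] for c in correlations if c["fit"] >= threshold]

print(above_threshold(correlations, 0.5))  # → ['lesson-a']
```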
If we were to standardize, from a CC perspective, to what level would we use the taxonomy: the canonical, or the derived?
That's the question before us, probably the major question, about what you do in Common Cartridge.
Here (slide) is a set of derived statements from Pearson. They result from dropping 'parentheticals' (actually, limiting clauses of the statement) or splitting lists.
Tools and Services
Current tools and services that we (ASN) offer:
- batch downloads of standards, which are freely available
- mechanisms to dereference an ASN URI (to get the taxon path, no logins, nothing)
- web services (APIs) that interact with metadata generation tools (no API key required)
- a searching and browsing interface within ASN
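As an illustration of consuming the dereferencing service, the following walks a taxon-path response. The JSON shape is an assumption for illustration; the actual response format of the ASN service isn't specified in these notes:

```python
import json

# A sketch of walking the taxon path returned when an ASN URI is
# dereferenced. The nested-JSON shape here is hypothetical.

sample_response = json.dumps({
    "uri": "http://example.org/asn/S100003",
    "text": "Identify evaporation, condensation, and precipitation.",
    "educationLevel": "Grade 4",
    "parent": {
        "uri": "http://example.org/asn/S100002",
        "text": "Describe the water cycle.",
        "parent": {
            "uri": "http://example.org/asn/D100001",
            "title": "Example State Science Standards",
        },
    },
})

def taxon_path_texts(payload):
    """Walk the nested taxon path from the statement up to the standards
    document, collecting the text (or document title) at each level."""
    node = json.loads(payload)
    texts = []
    while node is not None:
        texts.append(node.get("text") or node.get("title"))
        node = node.get("parent")
    return texts

print(taxon_path_texts(sample_response))
```

Note that, as the presenters said, no logins or API keys stand between a client and this kind of lookup: the URI itself is the interface.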
(series of slides showing the services)
We encourage other parties to create rich services around it; we are a research organization and will not be developing those services.
Within the U.S., NSF funds the gathering of all current and historical standards (761 of them), which have been decomposed (atomized) into "assertions" (RDF triples).
- break -
Standards Meta-Tagging Within the K12 Common Cartridge - Issues and Options
Mark Doherty (Academic Benchmarks) presenting
The goal here is to look at the issues and options involved in standards and meta-tagging (for our clients and us at Academic Benchmarks).
Academic Benchmarks provides the numbering system for the K-12 standards metadata for the content providers. Headquartered in Cincinnati. Founded in late 2003 and was focused on B2B provision. In the summer of 2009 launched the http://academicbenchmarks.org site to extend outreach to support teachers, educators and researchers, and published the entire document collection. Also some reports and surveys. The .org is free for all.
In the database there are 1.7 million records that reflect state, local or international standards. These are updated constantly (like painting the Golden Gate). AB GUID network with 175 clients in the numbering system. Also new clients from the open curriculum initiatives: Curriculum Pathways, Curriki, etc.
We started off with some tenets that drive us forward. We have a common and complementary challenge with IMS. There is a weakness in the K-12 education market that is stopping innovation and costing money. That weakness is a lack of a common method to communicate content and a growing set of metadata. There is a lack of clarity on the roles each group can play at various levels for ultimate success. There needs to be an element of practicality, of flexibility, of pragmatism. That is the approach we have taken to the marketplace.
Formula: technical model + business model + adoption = successful model. I think each of us in this room operates their business on this formula.
The challenge here is to serve districts with different tools, products and systems, each with a different technical approach and varying functional components. There is a need for flexibility. The AB response is the AB number, the AB GUID (the number associated with the academic standard), which is delivered to customers. That provides uniqueness for the standard. The GUID is the absolute center of what we do. We don't concern ourselves with the format - that's just the delivery mechanism - the number is key. The format may be AB XML, SIF XML, CSV, XLS, or some custom format. Common Cartridge K-12 is a pending format.
(someone else (Kelly?) presenting, very fuzzy, can't be understood)
The context is more around the challenge that we see the marketplace has, and the practical part is that it has to address some sort of mechanism that does exist now.
Not all K-12 districts or states will use a Harcourt (or McGraw Hill) platform for the whole time. They will be swapped in, swapped out, and multiple providers will be integrated at various points in this. That is one of the driving points of Common Cartridge. So we are driving at the idea that there needs to be a unique numbering system that all providers can use, so all can share the same metadata, without any loss of integrity.
We actually have response systems based on the platform (...?) They shop for that based on the content provider (... ?) (these sentences are literally gibberish, sorry)
Business Model - Operational Practicality
We all understand the difficulty in this. Whatever the authority is for the standard, they are the creator of the standard, but the issue we see is that they are not using their authority; their ability to solve a critical piece is diminishing here, and that authority has the opportunity to create great efficiency, if the standard is actually implemented (this is direct and literal; his sentences are this disjointed).
Our response here is that we collect the standards as published by the state, add value to them by converting them to an actionable state, add the number (the GUID) to them, and distribute them to our clients. What we have seen as an example of this is, we're doing this because the states won't (sorry, that was a literal sentence, if nonsense). We're doing this, in effect, for the states, on behalf of the states.
In IMS, the folks in the room are members, and in the same context are clients, and the government or the branch or whatnot - in every branch there is a question of how it is going to be funded. (gak)
How do we sustain? The market demands constant delivery of value. There's a free market solution in place at the numbering level. We are incented to innovate because we have substantially improved that offering. We are just one element of the metadata movement, and we need to be able to fit into whatever container our clients want us to fit into.
Every discussion comes down to, what is the financial model, how will this activity be funded? We've seen entities rise and fall in the past because of the untenability of the business model. With 1.7 million records, we are tenable. When we see a standard, we say, to us it's a standard that needs to be supported.
Benefits of Uniqueness
Examples of identification systems: zip code, bar code, ISBN.
We have content-neutral platforms (eg. Blackboard), the content providers (eg. Discovery Education), and the hybrid providers (eg. Curriki, BrainPop). There is a network effect to the AB GUID. The number is a GUID. It's a long complex string. It is really dry. That is the number by which these systems communicate. The lines represent real relationships in the market now. Imagine how this network effect can grow even larger.
We see tangible benefits of the AB GUID network: uniformity (accepted communication system for K-12 standards), cost savings (monitoring and digital deployment of standards), revenue opportunities (efficient delivery of products) and partnerships enabled (common link and technical model). IMS members, also AB clients, have already adopted a small piece of the overall solution, the AB GUID.
It's a proven technical model with a sustainable business model, and people actually use it.
Someone has to pay for a GUID. No pay, no number.
(Question: is there a computer interface so you can download all the numbers from the .org site?)
There is a search for the numbers. We are open to different methods, but at this moment the .org is intended to be an inventory rather than anything that is really downloadable.
One thing that came up in the break was the dilemma between the interest of the publishers in using the extended version and the interest of the platforms in using the canonical version.
One reason the publishers are so interested in building the extended version of the standards is for remediation, so you can break down the standards into subsets that can be very finely tuned.
It's an interesting scenario, and I readily accept the point, but there's a huge gap: if you're talking about using the standard for that purpose, what you're talking about is some kind of sequencing mechanism, and to do that you need a common model, or an algorithm, by which you're going to do the remediation.
You're actually talking about having a common sequencing mechanism for that remediation.
There are other approaches which might be a small step toward that. There's no reason you couldn't have a black-box algorithm that reacts to that. You could still have that. The system could also have its own proprietary sequence and search algorithm. In the (CC) architecture shown, that's somewhat enabled. So there's a chance for those advanced learning models (but not in the assessment, and not in the curriculum).
SCORM tried to hardwire sequencing in SCOs, which I don't think was a success. There was Learning Design, but it wasn't really adopted by providers. There's a movement to have a simpler LD that can be adopted in CC. The other option we've got is to use LTI - that is something that is capable of remediation, but doesn't need to be imposed on the LMS.
With sequencing you're painting yourself into a corner. Sequencing works when you've crafted everything together. But when you're looking at larger bodies, it's about scope and sequences, precursors, etc. It would be nice to know the order; if they teach out of order, they know about it, but aren't prevented from doing so. You can express these things in a useful way, but the real problem is that it is focused on the LMS, which prevents you from doing anything nuanced within the common cartridge.
In the current cartridge, there is an implied sequencing. There is an attempt to enhance that with the lesson plan. Ultimately it would be nice to have a machine-readable version of the lesson plan where you can describe alternative navigations through the learning material and the instructor can choose from those. Then you have the further alternative of offloading remediation to an external application via the LTI (the attraction of that is you don't need to invent any algorithms within the LMS).
There was a call to revisit Simple Sequencing. We're encouraging LD to support this approach.
When you look at it from a larger scale, it's scope and sequence, not just sequence. We have hierarchy issues that are interesting. But we have other scope and sequencing issues that are interesting. We want to ship not just the book, but also the lesson plans. The hope is that these could be adapted into the assessment system. I want the table of contents encoded, and related to state standards.
We have curriculum models that stipulate what needs to be accomplished by the students. But the job of the college is to assemble the material to do this. They frequently break the curriculum into modules and present them in very different ways from the way they're presented in the curriculum model.
There may be more than one opinion about how to order this at the macro model. They can all provide their own.
One way of looking at it: an effort to provide different views of the material in the package. In the cartridge there is no sequencing. But in the organizations it can be sequenced. But it's independent of the curriculum.
In the resource metadata, we have the curriculum standard metadata, which states:
- the originating authority
- the region to which it's being deployed
- the list from that model that's directly applicable to that resource - we use the URL whereby the platform can dereference the information
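Those three fields might look something like the following. The field names and URIs are illustrative assumptions, not the actual Common Cartridge metadata schema:

```python
# A sketch of curriculum standards metadata attached to a resource,
# covering the three fields above: originating authority, deployment
# region, and the dereferenceable statement URLs. All names hypothetical.

resource_metadata = {
    "curriculumStandards": {
        "authority": "Example State Department of Education",  # originating authority
        "region": "US-EX",                                     # region of deployment
        # statements directly applicable to this resource, given as URLs
        # the platform can dereference for the full taxon path and metadata
        "statements": [
            "http://example.org/asn/S100002",
            "http://example.org/asn/S100003",
        ],
    }
}

def dereferenceable_statements(metadata):
    """Collect the statement URLs a platform would dereference."""
    return metadata["curriculumStandards"]["statements"]

print(len(dereferenceable_statements(resource_metadata)))  # → 2
```

The design point is the same one made later in the discussion: the cartridge carries only references, and the platform resolves them against an external service.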
Question about how to resolve curriculum references, eg. AB Numbers
Comment: the current state curriculum is loaded into the LMS.
(More discussion on mechanisms - I suggested that it didn't make sense to load the curriculum information into the cartridge, but rather to refer to an external service that maps them.)
Comment: let's get some prototypes out, and then we can decide on what's really critical.
My comment: we cannot build a requirement that money be paid into the specification (eg., we cannot require in the specification that providers pay AB GUID money in order to map to a curriculum), because it must be possible to create and distribute cartridges without cost to the provider, so they can be distributed for free.
Comment: we will just have to support multiple providers of these standards. Eg. to deliver content into the UK. ASN has no incentive to go in there.