User guide

FAQ

Technology


How does one use emergent coding to build full-blown applications?

It can't... yet. There are four layers to the marketplace: behaviour, systems, data and byte. Agents in the behaviour layer are application-specific; they belong to a particular domain. It is these agents that you/other devs/end-users will contract to build 'large' applications. (A quick 3-minute video showing how agents are contracted to build a cut-down version of our site (both front- and back-end) can be found here. Note that this expression uses our old Pilot interface, and these agents do not yet exist in the current marketplace - the one that you see.)

All contributors are currently populating agents in the data layer, so the community has a ways to go before we populate the behaviour agents that will allow us to build applications. The path between here and there is filled with agents: we need more agents that design functions, then agents to design the asynchronous and event-driven aspects of programs, and then domain-specific agents to capture application requirements. Code Valley will be defining 'voids' (agents that need to be built) in the Valley as the community progresses.

TL;DR Right now, the marketplace cannot build large applications. The community of contributors first needs to populate the marketplace, starting from the level of functions/algorithms/logic, and work its way upwards.


But does emergent coding really solve the complexity issue?

e.g. “If I want to precisely describe the behaviour I want some other entity to create, haven't I already created that behaviour in whatever language I used to describe it?”

When using conventional software development methods, one must often describe high-level application behaviour in a language that was not created to sit at that level of abstraction. This gives the developer an insane amount of freedom, but with that freedom comes the dark side of scope creep, changing requirements, etc.

In emergent coding, in order to achieve rapid construction of software, we relinquish the right to a 'smooth space' of requirements. When you contract a 'webserver expert' (like here), you're contracting him for his expertise, one part of which is setting up a menu. You trust that he will design it to fit with the other elements of the site (that you put him in touch with via committees). You have no say over positioning, RGB colour codes etc. That is his job. You do have a say over higher level requirements such as colour theme (which will indirectly and ultimately influence RGB colour codes designed by agents).

Before you throw your arms up in frustration, know that it is not as bad as it seems. The concept we're talking about is 'constructability' (see pg. 7 of the whitepaper). It is present in all other industries except software. (And that's because all other industries have a physical product, as opposed to our intangible and easily replicated one.)

In the civil construction industry, for example, if your design called for a 520 mm wide I-beam, you would go check the specs (similar to researching your suppliers in this system) and see that nobody can provide such a beam. You would have to go with the 500 mm or the 600 mm wide suppliers. It will be the same in this emergent coding system. If you want a particular type of menu as part of your webserver core that is not offered by any existing supplier, you have to either modify your design and select one that is 'close' to what you wanted, or incentivise the creation of a supplier that will do what you want. Bear in mind that the latter option will cost substantially more, in both money and time, than the former. (It would be like the civil engineer commissioning an entirely new 520 mm wide beam to be created, an expensive exercise indeed.)

It is worth noting that if there is a market for a "520 mm beam" in this marketplace, suppliers will cater to that demand. In fact, the "520 mm beam" might even become the norm!

TL;DR Yes. In emergent coding, you will have access to powerful 'high-level' suppliers that will shoulder the bulk of the complexity for you.


How is an agent different from a library function?

It would be rather misleading to equate an agent's contribution with something that participates in the run-time of the program being designed.

Agents are applications themselves that, at their run-time, collaborate with other agents to design an application (which could even be an agent!). That is, the run-time of the agents coincides with the overall compile-time of the application being designed.

For example, the compare::bytesequence::lesser agent, when contracted, will design the code that will compare two strings. At first glance, a developer may see that title and assume that the compare::bytesequence::lesser agent will actually compare one given string against another when, in actual fact, it will return tailored code that compares two strings.

Technically, this compare::bytesequence::lesser agent will return code that will only compare two strings when in its place in the final executable – the fragment of code returned cannot actually be executed in isolation. In fact, no two fragments returned by this agent – or indeed, any agent – will be identical. Each fragment of code is highly customised to suit its precise intended run-time use-case. (If the reason for this is unclear, consider that the embedded memory and code addresses will be unique for each customised use-case.)
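
To make the distinction concrete, here is a minimal sketch contrasting a library function with an agent. The function names and one-byte "opcodes" below are invented placeholders for illustration; they are not the real agent interface or real machine encodings.

    # Hypothetical sketch: *performing* a comparison vs *designing code* that
    # performs it. Opcodes below are invented placeholders, not real encodings.

    def library_compare_lesser(a: bytes, b: bytes) -> bool:
        # A library function executes at the run-time of the final program.
        return a < b

    def agent_compare_lesser(addr_a: int, addr_b: int, addr_result: int) -> bytes:
        # An agent executes at the *compile-time* of the final program and
        # returns a code fragment tailored to this exact use-site. Because the
        # embedded addresses differ for every contract, no two fragments match.
        fragment = b"\x01" + addr_a.to_bytes(8, "little")        # load operand A
        fragment += b"\x02" + addr_b.to_bytes(8, "little")       # load operand B
        fragment += b"\x03" + addr_result.to_bytes(8, "little")  # store A < B
        return fragment

    # Two contracts for the "same" service yield different bytes:
    assert agent_compare_lesser(0x1000, 0x2000, 0x3000) != \
           agent_compare_lesser(0x4000, 0x5000, 0x6000)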

TL;DR An agent describes the functionality it will design, and does its part in a decentralised compiler to make sure that functionality is included in the overall program.


How are bugs found and fixed in this system?

In this system, emergence is on your side. An agent's contribution is an amalgamation of 'smaller' contributions (where the 'amalgamation' is part of the developer's specialist knowledge). The contribution an agent provides should be validated and tested by its developer-owner, as their future business and very reputation are on the line. This validation is carried out by building a program that self-contracts the new agent, which triggers a chain-reaction of contracts from the agent being vetted all the way down to the byte layer. If the output program passes acceptance testing, the developer has not only ensured their agent's contribution itself is sound, but has also indirectly vetted each and every supplier all the way to the byte layer. If the output program fails acceptance testing, the developer can easily pinpoint the failure to either their own contribution or that of one of their suppliers. If it is the former, the developer simply fixes their design, rebuilds their agent and conducts acceptance testing a second time. If it is the latter, the developer can simply swap out the faulty supplier for a new one. In this system, there is visibility everywhere - finding a needle in a haystack is not so difficult when every straw is manned.

Each developer simply verifies that their own agent's contribution is sound - that their 'cluster of straws' does not contain a needle. The developer's client will do the same for his agent (and in doing so, again indirectly validates all agents 'below') - essentially, every time the cluster of straws grows larger, it is also rechecked for needles. By the time the Pilot contracts into the marketplace, it is highly unlikely that such a 'needle' will ever exist. In the very rare (and costly) instance that it did, the Pilot developer simply swaps out the non-conforming supplier and rebuilds the program, just like any agent would.

TL;DR Finding a needle in a haystack is easy when every straw is manned.


How can I trust code that I cannot read?

Since the invention of the compiler, software development has centered around the production of source code. Over time, the software industry has built atop this foundation of "source code production" by developing countless techniques and mantras for helping to manage the development process. Every new technique, however, has built upon the same concept of source code production; nothing essential has changed. Techniques from other industries (techniques proven to handle complexity at large scales) cannot be well applied to the software industry because it exists in a strange world of Turing completeness, full-stack development, and intangible work-product.

What emergent coding brings to the table is an entirely new foundation on which software can be created. There is no source code. The drive to produce source code is replaced by the drive to collaborate with others to design a custom executable. Add to this the remuneration that emergent coders receive, and it's clear that this change to the foundation of software development more closely aligns it with other industries, those with tangible product output. Whilst the current state of emergent coding is indeed primitive, it allows higher-order concepts to be built on top, such as sophisticated client-supplier relationships, supplier quality ratings, professional associations, third-party auditing, and much more. By changing the essence of software development, the techniques from other industries can be much better applied.

This brings us to the question of trust. How is it that you can trust the car you drive, or the bridge you drive over every day? You trust these 'products' because they came from trusted and accountable suppliers. Does the car retailer dismantle every car and inspect each part before selling it? No; likewise, he also trusts (and holds accountable) his qualified suppliers. And so on. In most industries, there is actually a good deal of trust that exists between a client and his supplier, a quality that is established, cultivated and strengthened through the many deals that take place between the two parties over time, where each successfully completed contract serves to bolster their relationship.

TL;DR You trust the code's supplier, just like you trust the car dealership you purchased your car from.


How is the intellectual property of an agent protected?

Emergent coding naturally protects the intellectual property of each of its constituent agents. It does not rely on DRM, licensing agreements or client goodwill of any kind, but instead relies simply upon the powers of abstraction and encapsulation. The contribution an agent provides involves negotiations, predicates and delegations, machinery that stays hidden inside the agent thanks to the power of encapsulation. Any other developer can automate their agent to contract another agent knowing what it will do, but not how it does it, thanks to the power of abstraction. We know that the compare::bytesequence::lesser agent will design the code to compare two strings, but we are not privy to how that agent achieves such a design.

Each agent provides its contribution in co-operation with peers. And these peers can differ between projects (due to different clients or requirements configurations etc.), which can result in different committee outcomes and therefore different suppliers being contracted. As a result, the fragment of code returned by any one agent is highly contextually dependent. The code will only run when in its place in the final executable. Anyone is welcome to inspect the fragment (after paying the agent's contract fee, and satisfying its requirements, of course), but that fragment will not reveal anything about the complex machinery and decision-making that went into producing such a fragment. The 'source' (the design blueprint) has been decoupled from the code.

TL;DR IP protection is a natural feature of this system.


How does this system deal with the 'namespace pollution' problem?

All agents in the marketplace currently available for contracting are advertised in a highly formalised and standardised repository called the Valley. The Valley is hierarchically classified into six tiers: layer, verb, title, variation, suite and contributor, where each tier is essentially a standardised 'namespace'. Certain agents are classified under certain suites. Each suite is classified under a particular variation. Each variation is classified under a particular title. And each title is classified under a particular verb. Finally, verbs are classified under particular layers. A newly joined developer considering building a particular agent will first search for the relevant classification (layer::verb::title::variation::suite) in the Valley, before building an agent under that suite. It is in the developer's best interest to correctly find (or establish) the appropriate classification so that prospective clients can more easily locate the developer's agent as a potential supplier. An agent must be formally published to the Valley before it will be available for contracting by other agents in the network. An agent can only be formally published to the Valley under an existing classification. Thus, the classifications are created first, then the agents to fill those 'voids'.
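
As a rough illustration, a full classification can be thought of as a six-part path. The sketch below invents the suite and contributor names purely for demonstration:

    # Sketch: a Valley classification as its six tiers. The suite and
    # contributor names here are invented for illustration.
    from typing import NamedTuple

    class Classification(NamedTuple):
        layer: str
        verb: str
        title: str
        variation: str
        suite: str
        contributor: str

    path = "data::compare::bytesequence::lesser::examplesuite::examplecontributor"
    c = Classification(*path.split("::"))
    assert c.layer == "data" and c.variation == "lesser"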

With a formalised naming and classification system in place, the next problem to solve is that of disorganised growth. The Valley must grow systematically, and in response to tangible demand. Classifications created without some degree of measured thought and due diligence will result in an unnavigable Valley - a system in which demand cannot be easily connected with supply. Any ill-conceived classifications (i.e. namespaces in which no child names are added) are essentially spam. We have chosen to adopt a tried-and-tested spam prevention mechanism with a twist: if you 'correctly' identify a new classification (where 'correctness' is retrospectively assigned based on whether others choose to publish sub-classifications under your new namespace), you receive payment from every single one of those publishers. That is, all agents pay publishing fees to their parent suites. All suite publishers pay publishing fees to their parent variation. All variation publishers pay publishing fees to their parent title. All title publishers pay publishing fees to their parent verb. And all verb publishers pay publishing fees to their parent layer. The publishing fee is currently set to 0.01 mBTC per fortnight.

Thus, there is a small penalty for creating a new classification, but if that classification is deemed 'correct' by the community, there is potential for far greater reward. This mechanism essentially incentivises correct identification - if a developer correctly identifies a classification and many sub-classifications are created under it, the developer will not only break even, but will instantly see regular payments from each of the sub-classification publishers. For example, if a developer publishes a new suite and 6 agents publish to that suite, he may pay 0.01 mBTC/fortnight to the parent (title) publisher, but will also receive 0.06 mBTC/fortnight (in total) from the developers who publish agents under that new suite. Conversely, if a classification is incorrectly identified, or if there is no demand for it, the classification will likely be unpublished and removed from the Valley (as the publisher will not want to remain out of pocket for the regular publishing fee). As a result, the Valley will systematically expand in direct proportion to true agent demand.
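
The arithmetic in the example above can be sketched as follows; this is a minimal model of the fee flow, not the actual billing code:

    # Minimal model of the fortnightly fee flow (not the actual billing code).
    FEE = 0.01  # mBTC per fortnight, paid by every publisher to its parent

    def net_per_fortnight(children: int) -> float:
        # A publisher pays one fee upwards and receives one fee from each
        # sub-classification published underneath it.
        return children * FEE - FEE

    # The suite publisher from the example: pays 0.01, receives 6 x 0.01.
    assert abs(net_per_fortnight(6) - 0.05) < 1e-9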

TL;DR Using two mechanisms: formal standards and financial incentives.


How many suppliers does an agent typically have?

The number of suppliers an agent has can vary greatly and depends upon a number of different factors, first and foremost of which is the particular design that the expressor has chosen. Some agents may have only one or two suppliers, whilst others may have 15. It is, however, misleading to think that an agent with very few suppliers has a 'simpler' design than an agent with many suppliers. The reason for this is processing. Take, for instance, agents in the byte layer. These agents have no suppliers at all, yet their designs are not exactly 'simple.' They co-operate with their peers to design bytes of code, and this co-operation must be formally expressed so that it can be automated. If you were to inspect the expression of a byte level agent, you would find that the bulk of the requirements is expressed in the Processing window, while the Communications window remains largely empty.

The data layer of the Valley is quite 'thick' and many data agents side-contract other data agents. Primitive data agents contract directly into the byte layer. Composite data agents contract primitives (or lower level composites). Theoretically, a composite data agent towards the top of the data layer (close to the data-system interface) could be expressed by contracting directly into the byte layer. However, this would require the expressor to automate their agent to join a great many committees: the dof committees, their term committees, those committees' own term committees, and so on. This requires extensive and rather complicated processing, as the nested negotiations within committees result in a combinatorial explosion of possible automation paths. Basically, the expertise of the developer is spread too thin. The resulting agent may provide the correct service and ultimately deliver correct code, but that code will likely be far less efficient than the code designed by agents that were not automated to span many different levels of abstraction. Other agents in the same classification would likely out-compete this bulky agent.

At present, while the committee types and outcomes are relatively basic, you may be tempted to bridge multiple levels of abstraction with a single agent. However, as more sophisticated committee types, outcomes, negotiations and processing become available, you may find it difficult to keep up with your competitors who have chosen to specialise at a particular level.

So rather than saying "the more suppliers, the better the design," it is more accurate to say "the more processing, the better the design." In fact, if you do find your agent contracting many suppliers (say, more than 20), the answer is simple: you need fewer (and therefore better) suppliers.

TL;DR Typically, fewer than 20.


Why are you asking for my Country upon signup?

Unfortunately, the world has not yet caught up with Bitcoin and regulations have been applied prematurely. Since you will soon be receiving money from your Agents' clients and paying money to your Agents' suppliers, you may need to know their country of residence in order to fulfil your local tax and accounting obligations. Your Valley username and country of residence are the only two pieces of information that are public. Your email address will remain private, unless you choose to share it with your Agents' suppliers and clients.

TL;DR So that when your Agents start receiving revenue, you are in a position to fulfil accounting and tax obligations.


Why are you asking for my expertise?

The strength of the 'World Compiler' depends upon a developer being able to glean the integrity of a potential supplier from its public metrics. That is, reputation is key. While the network is young, and reputations are still being cultivated, it is paramount that developers be assured of some base level of integrity. By providing your expertise, you are demonstrating your calibre as a developer. Once in the network, you can be assured the developers you are joining are of a similar calibre.

Your list of expertise is also useful in determining the appropriate area of the Valley in which you are most likely to build agents, allowing us to more rapidly assist with the construction of new domains, in turn allowing your agents to earn revenue more quickly.

The list of expertise you provide and the applications you wish to see the network target first are kept private and will not be shared with the public.

TL;DR To keep the calibre of the network high, allowing you to more easily vet future suppliers of your agents.


Valley


Who creates the classifications in the Valley?

Any contributor is able to create a new classification in the Valley. When they do so, they are responsible for paying the parent classification a token publishing fee. In turn, the contributor will receive publishing fees from any contributors that publish sub-classifications under their own.

Users cannot create classifications at present. However, if you have need of a particular contribution, or have an idea for a new classification, you are welcome to post to the forums. Contributors will be monitoring these forums and adding to the Valley as necessary.

TL;DR Contributors.


What does the '::default' variation mean?

As the name suggests, a variation describes the different ways in which the title action can be designed. For example, the data::compare::bytesequence title currently has variations of ::default, ::lesser and ::greater, which are for designing the code to compare whether two strings are equal, whether the first is less than the second, or whether the first is greater than the second, respectively. Logically, it is reasonable to assume that the first variation created under a particular title will likely be the most commonly contracted contribution associated with that title. Furthermore, when the very first variation is being created under a particular title, it can be difficult to anticipate future variations. For these reasons, the first variation is labelled ::default. Subsequently created variations are usually labelled according to their role relative to the default variation.
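
For illustration only, the run-time behaviour of the code each variation designs might look like the following sketch. Remember, the agents design code with this behaviour; they do not execute it themselves.

    # Illustration: the run-time behaviour designed by each variation.
    def default_behaviour(a: bytes, b: bytes) -> bool:   # ::default
        return a == b                                    # strings are equal

    def lesser_behaviour(a: bytes, b: bytes) -> bool:    # ::lesser
        return a < b                                     # first less than second

    def greater_behaviour(a: bytes, b: bytes) -> bool:   # ::greater
        return a > b                                     # first greater than second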

TL;DR First variation created under a title. It is the 'default' contribution.


Are 'flow' dofs just incoming calls/callbacks?

One should avoid comparing agents to functions. An agent will provide functionality to a program, but this is very different from a "function." Thinking of agents as functions is incorrect and misleading.

The 'client' will provide a construction site to its supplier agents. This construction site is literally just some space reserved in the output executable file, and the supplier places bytes in that space. There is no use of function definitions or callbacks. Integration is purely a matter of placing bytes at the reserved position in the executable file.
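
A minimal sketch of this integration model follows; the reserve/place function names are invented for illustration:

    # Sketch of integration by byte placement (names invented for illustration).
    executable = bytearray(64)   # the output executable under construction

    def reserve(offset: int, size: int) -> tuple[int, int]:
        # The 'construction site' is just a window (offset, size) in the file.
        return (offset, size)

    def place(site: tuple[int, int], fragment: bytes) -> None:
        offset, size = site
        assert len(fragment) <= size, "fragment must fit its construction site"
        executable[offset:offset + len(fragment)] = fragment

    site = reserve(16, 8)        # client reserves space for a supplier
    place(site, b"\x90" * 8)     # supplier delivers its bytes into that space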

TL;DR No. Thinking of them this way is misleading.


Are data dofs (e.g. 'flag') just local variables?

Regarding the data dofs (e.g. integer/bytes/etc.), there is no real concept of local/global variables at the data layer. There are presently three options a data agent has when dealing with variables:

  1. Request access to a variable, which will be provided by the client (this is a 'rep' dof);
  2. Reserve its own variable (by contracting, for example, byte::contribute::bytes::default::x64) and provide its client with access (this is a 'chair' dof);
  3. Identical to option (2) but without providing the client with access to the variable. This is useful for such things as temporary variables.

In all three cases, there is no explicit variable scope. All variables exist in memory which can be accessed at any time during program execution. Where execution context matters, the system layer agents will ensure that the correct variables are provided to the data layer agents.
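
As a rough summary, the three options might be modelled like the sketch below. The enum and its names are invented for illustration; only 'rep' and 'chair' come from the text above.

    # Invented summary of the three variable-handling options listed above.
    from enum import Enum, auto

    class VariableHandling(Enum):
        REP = auto()      # 1. access to a client-provided variable ('rep' dof)
        CHAIR = auto()    # 2. reserved by the agent, shared with the client ('chair' dof)
        PRIVATE = auto()  # 3. reserved by the agent, not shared (e.g. temporaries)

    # In all three cases the variable is just a location in one flat memory,
    # accessible at any time during program execution; there is no scope.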

TL;DR No. Thinking of them this way is misleading.


Who creates new committee types?

In future, when the entire marketplace is decentralised, contributors will propose new committee types, and where there is general agreement amongst contributors in that specialisation, those committee types will be ratified and adopted. Because contributors will essentially be able to tailor the look and feel of their Agent-builder, it is their choice whether or not to have it make certain committee types available when expressing their agent. Thus, the committee types that are not agreed upon by the majority will never be used during live builds, and will naturally 'die off.'

For now, while the Valley is centralised, Code Valley is the only party that can officially add committee types. However, we are eager to step aside and no longer be the sole definer of committee types. When new types are put forward by developers and there appears to be some demand for such types, Code Valley will immediately add this functionality so that their agents can be automated to negotiate using the new types.

TL;DR Contributors and users suggest them, Code Valley adds them.


How do I add a classification?

As a Contributor, you have the ability to create new classifications in the Valley. To do this, go to your Valley portal, hit the “+” button next to the relevant classification, and input the details.

It is in your best interest to make it as easy as possible for prospective clients to home in on your agent as a supplier, so please make sure you are already somewhat familiar with other classifications in the vicinity of your planned new classifications.

If adding a suite classification (with corresponding dofs and specs), please refer to the idiomatic classifications standards here.

TL;DR Click the "+".


How do I know if the classification I have added is correct?

Quite simply, you will know if you have added a correct classification by how many contributors choose to publish 'underneath' you. If another contributor publishes a similar classification to your own and contributors opt to stay published underneath your classification, the community will have deemed yours the 'correct' one.

TL;DR By whether other contributors choose to publish underneath you.

Pilot


What is the Code Valley Community Fund?

Emergent coding is a very different way of developing software. Agents provide a service of design and deliver a compiled fragment, and they expect payment in return for that service. In its early days, such a revenue model falls prey to the chicken-and-egg problem. A developer typically thinks, "I'm only going to pay for something that I know works, but you're telling me I can only know that it works if I pay?" In an effort to combat this problem, we have put together a Community Fund, all of which will go towards paying agents contracted through the Pilot so that you may experience emergent coding without the hassle of making a payment. This gives you a chance to try out the technology for free without penalising developers who have already deployed agents to the system. Obviously, such a system can be abused, and we are appealing to you to treat the system fairly and use it for its intended purpose: to try out a new technology with no strings attached.

This 'free build' period will last only as long as the fund itself lasts, but anyone can donate. Thanks to the kind contributors to this fund, you are able to experience what it's like to be an emergent coder. If you believe in this technology and also want to chip in, please feel free to donate a few 'toshis or em-bits of your own to the following bitcoin address: 1CodeCvLUemAZGNGuQkJojnermEN1b3P6v.

TL;DR A mechanism to allow developers to build for free using agents that will still get paid.


What is the minimum amount you refer to?

All Bitcoin wallets have a minimum spend amount, generally to prevent 'dust' and spamming in the network. You may attempt to pay for a build but find that your wallet complains about the amount being too small. This is an unfortunate reality we must deal with, although builds being so cheap is a great problem to have! Costs are generally only this small when contracting down to the Data or Byte layers, so we expect it to become a non-issue in future. We've found the minimum amounts of some popular wallets range from about 2700 satoshi to 5500 satoshi; we have not enforced any minimum on our end.

If you find that a build is less than your wallet minimum, you have two options: a) register as a Contributor, so that build payments can be accumulated; or b) pay the minimum amount allowed by your wallet, and we will treat the excess (in the order of fractions of cents) as a donation to the platform.

TL;DR Bitcoin wallets have a minimum spend amount, forcing you to either pay at least that much per build, or register as a Contributor to accumulate payments.


What do I do if I think I've identified a faulty supplier?

If you do find that your build fails, please email team@codevalley.com with a description of your design and faulty behaviour. (Code Valley will then get to work on isolating the faulty supplier and appropriately recording this non-conformance against their agent in the Valley.) If your build is faulty, the cost of that build will be returned to you.

Bear in mind that this is not at all the permanent solution, but rather a stop-gap while the network of agents is in its infancy. As the network grows, relationships will strengthen between clients and suppliers, competition will grow between contributors, and reputation will become increasingly significant. The majority of product/service faults in other industries are dealt with at the direct client-supplier relationship level, with no need to introduce a governing body or third party. This localised fault management is clearly a nice decentralised approach to the solution, which still does not exclude third-party intervention if necessary. This is the path we envisage for emergent coding.

TL;DR Advise Code Valley immediately.

Agent-builder


As a Contributor, is there any way I can earn deployment credits?

As a Code Valley user, during Whitney you have the opportunity to earn deployment credits by adding value to the marketplace in other ways. A deployment credit will allow you to publish an agent for free. You can earn deployment credits in a number of different ways:

  • Referrals - Every time a new user joins, they have the opportunity to specify whether they were referred by an existing user. If you are nominated as a referrer, and if/when the new user builds their first agent, you will receive three deployment credits.

  • Educational contributions - You also have the opportunity to be awarded varying numbers of deployment credits by adding value to the marketplace in the form of community awareness and education. Any time you author a constructive and informative article/post/video about emergent coding, or make educational or clarifying comments in a discussion thread (e.g. reddit), tweet us the link, your Valley ID and the hashtag #valleycontributor. If we retweet the link, you will be awarded five deployment credits to thank you for your contribution.

  • Compensation for inadequate documentation - (This is a worst-case scenario that we fervently hope will never come to pass. We take great pride in ensuring our documentation is up to scratch, and is conveyed in as simple a manner as possible.) If an agent you built designs code that does not exhibit the correct run-time behaviour, and the source of the error is a flaw in your design due to an inadequacy in our documentation, please email team@codevalley.com with the agent's name, a brief explanation of the design flaw and the documentation (or lack thereof) that led to such a flaw. Once verified, you will be awarded an additional deployment credit to thank you for assisting us in improving our documentation.

  • Compensation for faulty Code Valley agent - (Again, this is a scenario we hope will never occur. We walk the talk and test each agent before deploying, but would not be so arrogant as to think every agent built by Code Valley is infallible. This is especially true of newly published agents, which are still trying to cultivate favourable metrics.) If (God forbid) you come across a Code Valley agent that designs code that does not exhibit the correct run-time behaviour, please email team@codevalley.com with the name of the agent and a brief description of the incorrect behaviour. After verifying that the agent is indeed faulty and rebuilding to fix the design error, we will immediately award you three deployment credits, to thank you for helping improve the overall integrity of the Valley.

TL;DR Yes, by contributing to the community in the form of education and awareness.


Why don't you have integrated version control for expression files?

You are free to use any version control software you desire; it's just that you will need to do so on your own end (i.e. download the expression files via the web app and then use your choice of software to manage versions on your own computer). Code Valley has made a business decision not to integrate any form of expression version control into the web app itself; our priorities lie elsewhere. Perhaps future contributors will appear in the marketplace to design version control functionality, particularly when agents are decentralised, but we will leave that up to the market.

TL;DR Integrated version control is part of the decentralised vision for Code Valley.


What is the type system for nouns?

Currently, the Agent-builder provides a very rudimentary (virtually non-existent) type-system for nouns. Essentially, there are only strings, numbers and triggers/events, all of which are treated as having a string value. If the noun is used in number processing then it is assumed to be an ASCII decimal representation of that number; if that is not the case, then it is treated as a value of zero (i.e. a noun value of "42" is treated as the number 42, whereas "hello" or "forty-two" are treated as the number 0). If a noun is used as a trigger (for example, to trigger the contracting of a supplier), then the value of the noun is irrelevant. If a noun is sourced from an event such as "contract.accepted" then the value is an empty string.
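
A sketch of that coercion rule (ignoring negative numbers for simplicity):

    # The noun coercion rule: every value is a string; number processing reads
    # it as ASCII decimal, and anything else is treated as zero. (Signs are
    # ignored here for simplicity.)
    def noun_as_number(value: str) -> int:
        return int(value) if value.isdigit() else 0

    assert noun_as_number("42") == 42
    assert noun_as_number("hello") == 0
    assert noun_as_number("forty-two") == 0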

TL;DR A rudimentary one, at present.


What does it mean to input as Hexscii or Ascii?

The options of Ascii and Hexscii are intended to solve a problem we face with communicating compile-time values between suppliers.

The solution we've settled on is that all compile-time values are just strings. The way in which agents interpret a value depends on the specific use-case. We've tried to make this intuitive, with information detailed in Valley descriptions, but it's still a common cause of confusion. If you keep in mind that all values are just strings, it should help when reasoning about this.

The Ascii option means, “What you type in here will be treated exactly as you typed it.” That is, “ABC” is 3 bytes long, with the first byte having a value of 'A' (41 hex). Whomever you send this value to will see it as “ABC”. This means that you can only specify values in the printable Ascii character range.

The Hexscii option means, “What you type in here is a Hexadecimal character representation of the actual value.” That is, “41 42 43” is 3 bytes long, with the first byte having a value of 41 hex (‘A’). Whomever you send this value to will see it as “ABC” (the conversion takes place before you send it). You can see how this gives you the ability to specify characters outside of the Ascii character range (and yet no additional conversation need take place to indicate a different interpretation).
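
In code, the two input modes amount to something like this sketch (the function names are ours, for illustration):

    # Sketch of the two input modes: Ascii is taken verbatim; Hexscii is a hex
    # character representation converted to the actual bytes before sending.
    def from_ascii(text: str) -> bytes:
        return text.encode("ascii")

    def from_hexscii(text: str) -> bytes:
        return bytes.fromhex(text)          # "41 42 43" -> b"ABC"

    assert from_ascii("ABC") == from_hexscii("41 42 43") == b"ABC"
    assert from_hexscii("0A") == b"\n"      # non-printable: only reachable via Hexscii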

The question then is, when should you really use one or the other?

Whether you're providing a value for the “Define” processing in the Agent-builder or to a supplier “Spec” in the Pilot, the rule of thumb is...

  • for a human-readable string, you should use Ascii (e.g. “Hello World”)
  • for a number, you should use Ascii (e.g. “42” is generally decimal 42)
  • for a string containing non-printable-Ascii characters, you should use Hexscii (e.g. “0A” (new line), or “2A 00 00 00 00 00 00 00” (the 64-bit little-endian representation of decimal 42)).

You'll typically use Hexscii only when contracting to the byte layer to specify a string of bytes. You might also use Hexscii if you want to put a carriage return or the like inside a bytesequence; unfortunately we do not yet handle escape characters (we can't handle “\n”, etc.), but we do want to build this in when we get the chance.

Some more examples to help you get your head around it… When an agent expects a value to be an integer, it will only accept a string of Ascii digits; anything else is considered a value of 0. For example, if you provided a number Spec as Ascii “42”, that's treated as decimal 42. If you provided it as Hexscii “42”, that's “B”, which is not a decimal digit, and so will be treated as decimal 0. Further, Hexscii “34 32” is equivalent to Ascii “42”, which (as above) is treated as decimal 42, though there is no reason to specify 42 in such a tedious way.

TL;DR If you are expressing a data layer agent, use the Ascii option (unless specifically trying to convey an escape character).


What do you mean "make an agent smarter"?

It is important to understand that the current committee types, outcomes and terms listed in VTS01: Communication are very simple, and currently involve little to no negotiation. You will also notice that there are very few types of committees with more than one valid outcome. This results in a rather 'flat' marketplace, where agents are somewhat limited in how sophisticated they can become. However, these committee types are not set in stone. In fact, we anticipate demand for changes and upgrades to these types as developers build their own agents and become more familiar with this system of development. Once comfortable, you will quickly recognise ways in which you can make your agent 'smarter'. At present, due to the simple committee types and rudimentary negotiations, there is little opportunity for making an agent smarter with regards to the decisions it makes and how these decisions influence the selection and organisation of suppliers. However, there is a wealth of opportunity with regards to choosing or creating more sophisticated algorithms for data agents to utilise in their designs. Your algorithmic ingenuity will be protected within your agent - you will be delivering a superior design-contribution as far as your clients and competitors are concerned, but they will not know how you are delivering that contribution. Your IP stays protected. In Ford, when developers are encouraged to make their agents 'smarter', you will be able to go beyond simply making smarter algorithmic choices and actually automate your agent to make smarter decisions (decisions that affect both in-band and out-of-band designs).

But we are not there yet. As a developer, your first step is to simply create a working agent using simple committee types, simple processing and simple organisation. It is still possible to produce software using these 'not-so-smart' agents, and that is the goal of Whitney. Once a new cohort of agents is added to the marketplace, and a new type of software becomes constructable, we can then, as a community, progress to Ford and build smarter agents, pushing them up the 'technology progression curve'. With this evolution, the marketplace will become capable of building the same type of software as in Whitney, but that software will be leaner and far more efficient - the result of the collective improvement of all agents in its supply-chain.

During Whitney, as you become more familiar with this system and your agent's own operations, you may have ideas or suggestions for improvements to existing committee types and outcomes (or you may have ideas for new ones altogether). We wholeheartedly encourage this community-wide discussion, which will take place in the Code Valley forums or in our Slack channel. We will take every suggestion on board, and if there is general consensus in the community, we will add the necessary machinery for these committee types so they are instantly available in the Agent-builder, putting you in a position to immediately upgrade your agent (should you wish to).

TL;DR Automate your agent to tailor its contribution based on compile-time information.