Neither Visa nor Mastercard really implements ISO 8583 in a standardized way, which means they each issue many thousands of pages of documentation covering not only which of the standard fields they use and how, but also how they cram their proprietary data into the messages. Most card management/issuance platforms do a decent job of abstracting this away, though.
A transition to ISO 20022 would be an improvement, but I don't think it will ever meet the required ROI threshold (globally) for that to happen.
Can attest, having searched through literally thousands of pages of documentation in an attempt to attribute the payment processing switch vendor when analysing the ATM jackpotting malware ‘FASTCash for Linux’[1]. The best I could do was determine the currency used for the fraudulent transactions, which may imply the country of the target financial institution.
Would be curious if anyone else has further insights.

[1] https://haxrob.net/fastcash-for-linux/
The large card networks have so many proprietary behaviors and extensions that I really doubt whether any common standard would even make sense at this point.
And if you look at how "modern" ISO 8583 is evolving, almost all changes and extensions are already happening in TLV-like subfields (where a new field unexpectedly appearing doesn't make existing parsers explode spectacularly), and the top-level structure is essentially irrelevant.
Of course, it's a significant hurdle for newcomers to get familiar with that outer layer, but I don't get the sense that catering to them is a particular focus of either network. ISO 8583 is also a great moat (one of many in the industry, really) for existing processors, which have no incentive to switch to a new standard and which the networks need to keep at least somewhat happy.
I thought that contact (chip) EMV was bad until I saw some of the stuff coming out of Discover cards for contactless EMV. Buying a test card set from somewhere like B2 Systems was very beneficial even just for integrating an EMV reader from a hardware device to a payment processor.
In this day and age of AI, having this kind of insider knowledge that is scattered, usually behind paywalls and NDAs, and constantly being updated, is a real advantage.
Because no LLM will be able to replace you for quite a while.
Having been involved in several ISO 8583 implementations/integrations, it's really quite wild how different each one was from the others in both structure and required content.
The type of protocol (message type, bitmap to define fields, followed by a set of fixed- and variable-length values) is pretty normal for the time it was developed in. Many low level things are basically packed C-structs with this type of protocol. It comes with some pitfalls on the receiver side: you have to carefully validate dynamic field lengths, refuse to read past the end of the message, and avoid allocating unbounded buffers. But all of those are well understood by now.
What I find baffling is that this "standard" does not specify how to encode the fields or even the message type. Pick anything: binary, ASCII, BCD, EBCDIC. That doesn't work as a standard; every implementation could send you nearly any random set of bytes with no way for the receiver to make sense of them.
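As a sketch of what that receiver-side care looks like, here is a toy bitmap-plus-fields parser with the bounds checks described above. The field definitions are invented for illustration; no real network's spec looks exactly like this.

```python
# Toy ISO 8583-style layout: 4-char ASCII MTI, 8-byte primary bitmap,
# then fields in bit order. Field definitions are invented for this sketch.
FIELD_SPEC = {
    2: ("llvar", 19),   # PAN: 2-digit ASCII length prefix, up to 19 digits
    3: ("fixed", 6),    # processing code
    4: ("fixed", 12),   # amount
}

def parse(msg: bytes) -> dict:
    if len(msg) < 4 + 8:
        raise ValueError("message shorter than MTI + primary bitmap")
    mti = msg[:4].decode("ascii")
    bitmap = int.from_bytes(msg[4:12], "big")
    pos, fields = 12, {}
    # Bit 1 would announce a secondary bitmap; ignored here for brevity.
    for field_no in range(2, 65):
        if not bitmap & (1 << (64 - field_no)):
            continue  # bit clear: field absent
        kind, limit = FIELD_SPEC.get(field_no, (None, None))
        if kind is None:
            # No standard way to skip an unknown field: we must give up.
            raise ValueError(f"field {field_no} present but unknown")
        if kind == "llvar":
            if pos + 2 > len(msg):
                raise ValueError("truncated length prefix")
            length = int(msg[pos:pos + 2].decode("ascii"))
            if length > limit:
                raise ValueError(f"field {field_no} exceeds max length {limit}")
            pos += 2
        else:
            length = limit
        if pos + length > len(msg):  # never read past the end of the message
            raise ValueError(f"field {field_no} runs past end of message")
        fields[field_no] = msg[pos:pos + length].decode("ascii")
        pos += length
    return {"mti": mti, "fields": fields}
```

Note how even this toy version mixes encodings: the bitmap is raw binary while the fields are ASCII, which is exactly the complaint above.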
> Many low level things are basically packed C-structs with this type of protocol.
Not really: C structs notably don't have variable-length fields, but ISO 8583 very much does.
To add insult to injury, it does not offer a standard way to determine field lengths. This means that in order to ignore a given field, you'll need to be able to parse it (at least at the highest level)!
Even ASN.1, not exactly the easiest format to deal with, is one step up from that (in a TLV structure, you can always skip unknown types by just skipping "length" bytes).
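A minimal illustration of that skip-by-length property, with one-byte tags and lengths for brevity (real BER-TLV, as in EMV's field 55, uses multi-byte tag and length forms):

```python
# Walk a simple TLV stream: unknown tags can be skipped by jumping over
# "length" bytes, which is exactly what ISO 8583's field layer can't do.
def tlv_items(data: bytes):
    pos = 0
    while pos < len(data):
        if pos + 2 > len(data):
            raise ValueError("truncated TLV header")
        tag, length = data[pos], data[pos + 1]
        if pos + 2 + length > len(data):
            raise ValueError("TLV value runs past end of data")
        yield tag, data[pos + 2 : pos + 2 + length]
        pos += 2 + length  # works even when the tag is unknown to us

# A tag we don't understand (0x01, invented) followed by one we do (0x5A).
msg = bytes([0x01, 0x02, 0xAA, 0xBB, 0x5A, 0x03, 0x41, 0x11, 0x11])
known = {tag: val for tag, val in tlv_items(msg) if tag == 0x5A}
```

The unknown 0x01 tag is walked over without the parser needing to know anything about it.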
A bitmap to define field presence doesn’t seem so offensive, as far as serialization formats go. FlatBuffers[1] use a list of offsets instead, but that’s in the context of expecting to see many identically-shaped records. One could argue that Cap’n Proto with zero-packing[2] amounts to the same thing if you squint, just with the bitmap smeared all over the message.
I mean, this specific thing sounds like it should have been a fatter but much more unexciting TLV affair instead. But given it’s from 1987, it’d probably have ended up as ASN.1 BER in that case (ETA: ah, and for extensions, it mostly did, what fun), instead of a simpler design like Protobufs or MessagePack/CBOR, so maybe the bitmaps are a blessing in disguise.

[1] https://google.github.io/flatbuffers/flatbuffers_internals.h...
[2] https://capnproto.org/encoding.html#packing
Telegram's "TL" serialization, which is part of its network protocol, also uses a bitmap for optional fields. It's an okay protocol overall. The only problem is that the official documentation[1] was written by Nikolay Durov, who is a mathematician. He just loves to overgeneralize everything to a ridiculous degree and spend two screens' worth of outstandingly obtuse text to say what amounts to, for example, "some types have type IDs before them and some don't, because the type is obvious from the schema".

[1] https://core.telegram.org/mtproto
I'd trade the field layer of ISO 8583 for some ASN.1 any day!
Luckily, there's a bit of everything in the archeological site that is ISO 8583, and field 55 (where EMV chip data goes, and EMV itself is quite ASN.1-heavy, presumably for historical reasons) and some others in fact contain something very close to it :)
As far as I'm concerned, we solved binary formats with ASN.1 and its various encodings. Everything afterwards has been NIH, ignorance, and square wheel reinvention.
I think ASN.1 is good, but there are some problems with it. I think that it should not need separate type numbers for the different ASCII-based string types and separate type numbers for the different ISO-2022-based string types; you can use one number for ASCII and one number for ISO 2022; the restrictions should be a part of the schema and not a part of the BER/DER. Furthermore, I think it has too many date/time types. Also, some details of the other types (e.g. the real-number type) are messier than they would be if designed better.
I had made up the "ASN.1X", which includes some additional types such as: key/value list, TRON string, PC string, BCD string, Morse string, reference, out-of-band; and deprecates some types (such as OID-IRI and some of the date/time types; the many different ASCII-based and ISO-2022-based types are kept because a schema might have different uses for them in a SEQUENCE OF or SET OF or a structure with optional fields (even though, if I was designing it from the start, I would have not had many different types like that)), and adds a few further restrictions (e.g. it must be possible to determine the presence or absence of optional fields without looking ahead), as well as some schema types (e.g. OBJECT IDENTIFIER RELATIVE TO). (X.509 does not violate these restrictions, as far as I know.)
I also have an idea relating to a new OID arc that would not require registration (there are already some, but this idea differs in its working, including a better structure fitting the working of OIDs); I can make (and have partially made) the document of the initial proposal of how it could work, but it would need to be managed by ITU or ISO. (These are based on timestamps and various kinds of other identifiers that may already be registered at a specific time; even if those are not permanent, the OIDs will be permanent due to the timestamps. It also includes some features such as automatic delegation for some types.)
There are different serialization formats for ASN.1 data; I think DER is best and that CER, JER, etc. are no good. I also invented a text-based format which can be converted to DER (it is not really meant to be used in other programs, since it is more complicated than parsing DER, so using a separate program to convert to DER is better, in order to avoid adding such complexity to programs that do not need it), and I wrote a program that implements that.
ASN.1 DER, BER, or OER? Implicit and optional tagging can really break compat in surprising ways. Then there is the machine-unfriendly roster of available types. XDR was more tuned for that.
Finally, free tooling doesn't really exist. The connection to the OSI model also didn't help.
> ASN.1 DER, BER, or OER?

Or XER or JER! One of the brilliant things about ASN.1 is that it decouples the data model from the serialization format. Of the major successor systems, only protobuf does something similar, and the text proto format barely counts.
> Implicit and optional can really break compat in surprising ways
Any implementation of any spec can be broken. You could argue that the spec should be simpler and require, e.g., explicit tagging everywhere, like protobuf. Sure. But the complexity enables efficiencies, and it's sometimes worth making a foundational library a bit more complex to enable simplifications and optimizations throughout the ecosystem.
> Then there are the machine unfriendly roster of available types
Protobuf's variable-length integers are machine-friendly now? :-) We can always come up with better encoding rules without changing the fundamental data structures.
> Finally free tooling doesn't really exist.
What do you mean? You use ASN.1 to talk to every server talking SNMP, LDAP, or the X.509 PKI. Every programming environment has a way to talk about ASN.1.
> The connection to the OSI model also didn't help.
Agreed. The legacy string types aren't great either. You can, of course, do ASN.1 better. No technology is perfect. But what we don't need, IMHO, is more investment in "simple" technologies like varlink that end up being too simple and shunting complexity and schema-ness that belongs in a universal foundation layer into every single application using the new "simple" thing.
> ASN.1 DER, BER, or OER? Or XER or JER!

My opinion is that DER is better. (However, DER is a restricted form of BER: any DER file is also a valid BER file, but with certain requirements on the encoding, so that it is a canonical form. The other canonical form is CER, but in my opinion DER is better.)
> Every programming environment has a way to talk about ASN.1.
Not all implementations are well designed, though; I have seen many implementations of ASN.1 that are not well designed. I made up my own, hoping it will be better, but we will see.
> But what we don't need, IMHO, is more investment in "simple" technologies like varlink that end up being too simple
I agree with this, and it is important. This was my intention when designing my own stuff: to be neither too simple nor too complicated. Most stuff I see tends to be either too complicated or too simple, so I try to make stuff better than that.
A lot of payments chatter on here recently, and patio11 throwing out some great content as well. May I ask where this pretty visual explanation website was 25 years ago? ;) Oh, the woes of programming ISO 8583; as I see, another commenter mentioned EBCDIC, which adds a whole other level of mind-numbing when passing between the endians. It was a fun experience, however, back in the early 2000s when I worked in isolation with Discover card to get the GUID field added to the ISO 8583 specification.
We are living in changing times on many fronts, and the world's financial systems are one of the new battlefields. Many are ignorant of what is occurring, but with big tech owning their own payments ecosystems, this should be insight for others not yet aware, as we are absolutely certain to see more following their lead. Some of those followers are entire countries (they are just a bigger business, after all); it is already happening for those aware, and a small select few are doing it.
I learned a lot more about this discussing the PCI/DSS [0] regulation framework here [1]. It's about to change to a new 4.0 in 2025, which means that to use or run any payments system you'll have to meet ever more stringent regulation. This is going to start applying to other pseudo-currencies (in-game value tokens etc.) if they exceed a certain value and scale. At present Visa and Mastercard have a big stake in defining this (capturing the regulator).

Interestingly, local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this, which seems a boost for paper technologies.
- PCI DSS 4.0 is already in place and will be retired on December 31, 2024. PCI DSS 4.0.1 is the replacement and is already in place.
- PCI DSS 4.0.1 and game tokens have nothing in common. The applicability of PCI DSS requirements is decided by the card brands, aka Visa, Mastercard, etc. And it is up to the acquirers to enforce the standard on third-party service providers. The standard itself has no power over anyone.
- Mastercard and Visa have high stakes because technically they are the regulators. EMVCo, the core of card payments, was built by Europay (later acquired by Mastercard), Mastercard, and Visa. The M and V of it manage the chip on cards, online payments, and much more. PCI SSC is merely a supervisory authority that sets the standard and the processes for assessments and investigations on behalf of these brands.
Side note: while the other card brands accept PCI DSS as an entry-level requirement, they do not have as much say in it as Mastercard and Visa.
PCI-DSS is an industry standard, not a law. If you don't think it should apply to your domain, complain to your legislators/regulators, not the authors of PCI-DSS or the payment industry covered by it!
> Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this
And so do countless other digital (non-real?) payment systems across the globe. That's not to say that there aren't any other security regulations, but they're also most certainly not in PCI scope.
Arguably, the original sin of the card payments industry in particular, and US American banking in general, is treating account numbers as bearer tokens, i.e. secret information; if you don't do that, it turns out that a lot of things become much easier when it comes to security. (The industry has successfully transitioned off that way of doing things for card-present payments, but for card-absent, i.e. online, transactions, the efforts haven't been nearly as successful yet.)
Oh, this format was fun. You could see history unfold when parsing it. The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD. But one field contained XML. And the XML had an embedded JSON. The inner format matched the fashion trend of the year when someone had to extend the message with extra data. :-)
> The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD.
The "great" thing about most ISO 8583 implementations I've worked with (all mutually incompatible at the basic syntactic level!) is that they usually freely mix EBCDIC, ASCII, BCD, UTF-8, and hexadecimal encoding across fields.
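As a toy illustration of that mix (the code page and field layout here are assumptions for the example, not any particular spec), decoding the same digits from ASCII, EBCDIC, and packed-BCD fields ends up needing a decoder per encoding:

```python
# The same digits "12345" as three different on-the-wire representations,
# the kind of per-field encoding zoo described above.
def from_bcd(data: bytes, digits: int) -> str:
    """Unpack BCD: two decimal digits per byte, high nibble first."""
    out = "".join(f"{b >> 4}{b & 0x0F}" for b in data)
    return out[:digits]  # odd digit counts are padded; trim to length

ascii_field = b"12345".decode("ascii")
ebcdic_field = b"\xF1\xF2\xF3\xF4\xF5".decode("cp500")  # EBCDIC digits 0xF0-0xF9
bcd_field = from_bcd(b"\x12\x34\x50", 5)                # packed BCD, padded
```

All three decode to the same string, but a parser has to know, per field and per implementation, which decoder applies.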
It has been fun seeing all the different ways companies have come up with to work around the limitations of ISO 8583. One I’ve been seeing a lot lately is making an API call before/after the ISO message (with non-PCI data) to convey additional information outside of the payment transaction. Definitely speeds up time to market, but opens up a whole new array of failure modes to deal with.
Unlike with Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous. There is something so magical about a notification popping up on my phone/watch literally the second I swipe a card. I always wondered about the layers in the stack which V/MC must have and AMEX doesn't.
As others say, it's not a matter of Visa vs. Amex. I use both a Mastercard and a Visa with a neobank in Europe, and I get instant notifications. It must be something to do with the bank (US banking is famously behind, but I also see days-long delays with traditional European banks).
Even more magical: when sending money to someone I'm physically present with, I hear their notification before the "money sent" animation has finished on my own phone.
It's probably less about layers and more about the different number of stakeholders.
Visa/MC transactions go through at least four systems (merchant, merchant payment service provider/acquirer, Visa/MC, issuer processor); Amex is merchant acquirer and card issuer in one, so there is no network and accordingly at most three parties involved (merchant, merchant PSP, Amex).
That said, some of my Visa/MC cards have very snappy notifications too. In principle, the issuer or issuer processor will know about the transaction outcome even before the merchant (they're the one responding, after all), and I definitely have some cards where I get the notification at the same time and sometimes before the POS displays "approved".
It's a per-country thing. Card txns in the US are bananas arcane byzantine nightmares. (Source: worked at Canada's largest bank on txn processing software.)
> Unlike with Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous. There is something so magical about a notification popping up on my phone/watch literally the second I swipe a card. I always wondered about the layers in the stack which V/MC must have and AMEX doesn't.
Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Some smaller banks upload available balances to processors and perform clearing later in a back office only. They just don't have a hook to link a notification and send it only after the actual settlement.
>> Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Well, that's sort of the thing... with Visa and MC, there is an extra layer or two of the bank or Fidelity Information Services. With Amex, they own the full stack end to end.
Your card limit gets checked on every transaction. There doesn't seem to be a technical reason why information flow back to me should be limited in any way. If the extra layer fails to work the transaction fails to pass.
I remember being charged after a while when paying for bus/metro tickets in some places; I think those machines process transactions in batches or something.
> Your card limit gets checked on every transaction.
Nope. The merchant can choose the level of verification - in some cases, like copying the card with an imprinter [1] or running phone transactions (yes, that is possible - it's called MOTO [2]), it's obviously impossible to check card limits.
The downside of CNP transactions is that the merchant is fully liable for everything from fraud to chargebacks to exceeded limits.
And then there are card-present transactions where network connectivity is down for whatever reason... It's been a while since I messed with that, but at least for German cards you could configure the terminal to store the account details for later submission when connectivity was restored.
Great article that shows why ISO 20022 will replace 8583 over time, especially in areas not dominated by the M/V monopoly networks.
Credit cards, with all their nonsense about cash back and rewards, can be implemented in the new payment systems with banks offering line-of-credit accounts that are part of the appropriate "national payment system", like UPI, PromptPay, Osko/PayID, FedNow, etc.
Instant settlement between accounts, low cost fixed price txns etc.
Fun anecdote: Thailand's entire banking network (including regular wire transfers) was implemented with ISO 8583 (!). Part of the AnyID master plan (later renamed PromptPay) was to replace the country's (ab)use of ISO 8583 with ISO 20022. The Ministry of Finance hired UK-based Vocalink to build this converter, along with other systems MoF hired them for. (AFAIK, the entire stack was written in Erlang.)
Fun times reviewing the masking logic of credit card data spewed out in system logs, in base64-encoded (or god forbid, EBCDIC-encoded base64-encoded) ISO 8583.
The problem is that the contactless stuff is all custom per network.
Some of the implementations are reasonably close to contact EMV; others might as well be a completely different stack and technology.
this is the way. Shove everything into field 47.
dear god will I never forget all of these terrible details
correct. which is why people prefer to buy the 8583 implementations.
like https://jpos.org/
There are zero free ASN.1 compilers or module checkers.
> There are zero free ASN.1 compilers or module checkers.
I must be misunderstanding what you're saying, because this exists: <https://www.erlang.org/doc/apps/asn1/asn1ct.html#>
From the linked page:
> asn1ct
> ASN.1 compiler and compile-time support functions
> The ASN.1 compiler takes an ASN.1 module as input and generates a corresponding Erlang module, which can encode and decode the specified data types. Alternatively, the compiler takes a specification module specifying all input modules, and generates a module with encode/decode functions.
XML also decouples the data model and serialization with the XML Infoset specification.
Something similar is TLV which is extremely common in binary network protocols, because it's very flexible for compatibility.
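A sketch of why TLV extends so gracefully (the one-byte tag and length formats here are invented for illustration, not any real protocol's): a parser skips unknown tags by their declared length instead of blowing up on them.

```python
# Minimal TLV stream walker with one-byte tags and lengths (a simplification).
# Unknown tags are skipped by length, so new fields don't break old parsers.

def walk_tlv(buf: bytes):
    i = 0
    while i < len(buf):
        tag, length = buf[i], buf[i + 1]
        yield tag, buf[i + 2:i + 2 + length]
        i += 2 + length

stream = bytes([
    0x01, 0x02, 0xAB, 0xCD,  # tag 0x01, known to our parser
    0x7F, 0x01, 0xFF,        # tag 0x7F, unknown -- still safely skippable
    0x02, 0x01, 0x00,        # tag 0x02, known
])
parsed = {tag: value for tag, value in walk_tlv(stream)}
```

The walker never needs to understand tag 0x7F to get past it, which is the compatibility property that makes TLV so common in binary network protocols.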
A lot of payments chatter on here recently, and patio11 throwing out some great content as well. May I ask where this pretty visual explanation website was 25 years ago? ;) Oh, the woes of programming ISO 8583; as another commenter mentioned, EBCDIC adds a whole other level of mind-numbing when passing between the endians. It was a fun experience, though, back in the early 2000s when I worked in isolation with Discover card to get the GUID field added to the ISO 8583 specification.
We are living in changing times on many fronts, and the world's financial systems are one of the new battlefields. Many are ignorant of what is occurring, but with big tech owning their own payments ecosystems, this should be insight for others not yet aware, as we are absolutely certain to see more following their lead. Some of those followers are entire countries (they are just a bigger business, after all); it is already happening for those paying attention, and a small select few are doing i.t.
Stay Healthy!
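For anyone who hasn't had the pleasure: the outer layer of an ISO 8583 message is an MTI followed by a bitmap announcing which of the 64 fields follow (bit 1 conventionally signals a secondary bitmap for fields 65-128). A rough sketch of reading a hex-encoded primary bitmap (simplified; real specs vary wildly per network, as this thread attests):

```python
# Decode an ISO 8583 primary bitmap: bit i (counted from the left) set
# means field i is present in the message body.

def present_fields(bitmap_hex: str) -> list[int]:
    bits = int(bitmap_hex, 16)
    return [i for i in range(1, 65) if bits & (1 << (64 - i))]

# Build a bitmap claiming fields 2 (PAN), 3 (processing code),
# 4 (amount), and 11 (STAN) are present:
bitmap = format(sum(1 << (64 - f) for f in (2, 3, 4, 11)), "016x")
```

The pain is rarely this outer layer itself; it's that the per-field syntax and encodings behind it differ across every implementation.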
I learned a lot more about this discussing the PCI/DSS [0] regulation framework here [1]. It's about to change to a new 4.0 in 2025 which means that to use or run any payments system you'll have to meet ever more stringent regulation. This is going to start applying to other pseudo currencies (in game value tokens etc) if they exceed certain value and scale. At present Visa and Mastercard have a big stake in defining this (capturing the regulator).
Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this, which seems a boost for paper technologies.
[0] https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Sec...
[1] https://cybershow.uk/episodes.php?id=36
[2] https://brixtonpound.org/
There is some confusion in that comment.
- PCI DSS 4.0 is already in place and is to be retired on December 31, 2024. PCI DSS 4.0.1 is the replacement and is already in place.
- PCI DSS 4.0.1 and game tokens have nothing in common. The applicability of PCI DSS requirements is decided by the card brands, i.e. Visa, Mastercard, etc., and it is the acquirers who enforce the standard on third-party service providers. The standard itself has no power over anyone.
- Mastercard and Visa have high stakes because technically they are the regulators. EMVCo, the core of card payments, was built by Europay (later acquired by Mastercard), Mastercard, and Visa. The M and V of it manage the chip on cards, online payments, and much more. PCI SSC is merely a supervisory authority that sets the standard and the process of assessments and investigations on behalf of these brands.
Side note: While the other card brands accept PCI DSS as an entry-level requirement, they do not have as much say in it as Mastercard and Visa.
PCI-DSS is an industry standard, not a law. If you don't think it should apply to your domain, complain to your legislators/regulators, not the authors of PCI-DSS or the payment industry covered by it!
> Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this
And so do countless other digital (non-real?) payment systems across the globe. That's not to say that there aren't any other security regulations, but they're also most certainly not in PCI scope.
Arguably, the original sin of the card payments industry in particular, and US American banking in general, is treating account numbers as bearer tokens, i.e. secret information; if you don't do that, it turns out that a lot of things become much easier when it comes to security. (The industry has successfully transitioned off that model for card-present payments, but for card-absent, i.e. online, card transactions, the efforts haven't been nearly as successful yet.)
I wonder if this is the standard that drove Charles Stross slightly insane and led to Accelerando.
https://www.antipope.org/charlie/blog-static/fiction/acceler...
Actually, based on the timing, this is probably the new, better standard that replaced the obscure protocols of the '70s.
Oh, this format was fun. You could see history unfold when parsing it. The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD. But one field contained XML. And the XML had an embedded JSON. The inner format matched the fashion trend of the year when someone had to extend the message with extra data. :-)
> The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD.
The "great" thing about most ISO 8583 implementations I've worked with (all mutually incompatible at the basic syntactic level!) is that they usually freely mix EBCDIC, ASCII, BCD, UTF-8, and hexadecimal encoding across fields.
Fascinating, I don't think I've ever seen an XML field! Do you remember which network that was for?
We were the issuer. So these were probably the payment processor's extensions. But we were issuing MasterCards.
It has been fun seeing all the different ways companies have come up with to work around the limitations of ISO 8583. One I’ve been seeing a lot lately is making an API call before/after the ISO message (with non-PCI data) to confer additional information outside of the payment transaction. Definitely speeds up time to market, but opens up a whole new array of failure modes to deal with.
Unlike with Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous. There is something so magical about a notification popping up on my phone/watch literally the second I swipe a card. I always wondered about the layers in the stack which V/MC must have that AMEX doesn't.
As others say, it's not a matter of Visa vs. Amex. I use both a Mastercard and a Visa with a neobank in Europe, and I get instant notifications. It must be something to do with the bank (US banking is famously behind, but I also see days-long delays with traditional European banks).
Even more magical: when sending money to someone I'm physically present with, I hear their notification before the "money sent" animation has finished on my own phone.
It's probably less about layers and more about the different number of stakeholders.
Visa/MC transactions go through at least four systems (merchant, merchant payment service provider/acquirer, Visa/MC, issuer processor); Amex is merchant acquirer and card issuer in one, so there is no network and accordingly at most three parties involved (merchant, merchant PSP, Amex).
That said, some of my Visa/MC cards have very snappy notifications too. In principle, the issuer or issuer processor will know about the transaction outcome even before the merchant (they're the one responding, after all), and I definitely have some cards where I get the notification at the same time and sometimes before the POS displays "approved".
I have a visa card with a Canadian bank and get transaction messages within 5 seconds of payment usually. Maybe it is a per bank thing?
It's a per-country thing. Card txns in the US are bananas arcane byzantine nightmares. (Source: worked at Canada's largest bank on txn processing software.)
I also get transaction notifications at a similar speed in the UK, in pretty much all of my bank accounts.
AMEX is the bank. For Visa/Mastercard, the latency is probably due to the bank they have to route the transaction through.
Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Some smaller banks upload available balances to processors and perform clearing later in a back office only. They just don't have a hook to link a notification and send it only after the actual settlement.
>> Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Well, that's sort of the thing... with Visa and MC, there is an extra layer or two of the bank or Fidelity Information Services. With Amex, they own the full stack end to end.
Your card limit gets checked on every transaction. There doesn't seem to be a technical reason why information flow back to me should be limited in any way. If the extra layer fails to work, the transaction fails to pass.
I remember being charged after a while when paying for bus/metro tickets in some places; I think those machines process transactions in batches or something.
> Your card limit gets checked on every transaction.
Nope. The merchant can choose the level of verification - in some cases, like copying the card with an imprinter [1] or running phone transactions (yes, that is possible - it's called MOTO [2]), it's obviously impossible to check card limits.
The downside of CNP transactions is that the merchant is fully liable for everything from fraud and chargebacks to exceeded limits.
And then you have card-present transactions where the network connectivity is down for whatever reason... it's been a while since I messed with that, but at least for German cards you could configure the terminal to store the account details for later submission once connectivity was restored.
[1] https://en.wikipedia.org/wiki/Credit_card_imprinter
[2] https://docs.adyen.com/point-of-sale/mail-and-telephone-orde...
Great article that shows why ISO 20022 will replace 8583 over time, especially in areas not dominated by the M/V monopoly networks.
Credit cards, with all their nonsense about cash back and rewards, can be implemented in the new payment systems with banks offering line-of-credit accounts that are part of the appropriate "national payment system", like UPI, PromptPay, Osko/PayID, FedNow, etc.
Instant settlement between accounts, low cost fixed price txns etc.
Fun anecdote: Thailand's entire banking network (including regular wire transfers) was implemented with ISO 8583 (!). Part of the AnyID master plan (later renamed PromptPay) was to replace the country's (ab)use of ISO 8583 with ISO 20022. The Ministry of Finance hired UK-based Vocalink to build this converter, along with the other systems MoF hired them for. (AFAIK, the entire stack was written in Erlang.)
We’ve had a lot of success with our Go library for iso8583
https://github.com/moov-io/iso8583
> "ISO 8583: The language of credit cards"
"ISO 8583: The language of both debit and credit cards"
And sometimes even bank transfers (I believe at least FPS in the UK used it, or possibly still does).
Also don't forget about prepaid cards, charge cards etc., depending on where they exist in your personal ontology of card funding methods ;)
Fun times reviewing the masking logic of credit card data spewed out in system logs, in base64-encoded (or god forbid, EBCDIC-encoded base64-encoded) ISO 8583.
(In the holiday spirit)
The only language of credit cards is points, cashback, APYs, and hard to read TOS