Chris McCormick - News


    Depiction of decentralized network

    In 2014 Arvid Norberg and Steven Siloti came up with a BitTorrent extension called BEP44. The basic purpose of BEP44 is to allow people to store small pieces of information in a part of the BitTorrent network called the DHT. The DHT ("distributed hash table") is a key/value lookup table that is highly decentralized. Prior to BEP44 it was used to look up the IP addresses of peers from the hash of the torrent they were sharing.

    BEP44 introduces two new ways of storing key-value data in the DHT. The first is the ability to look up small bits of information keyed directly on their hash. The second allows for the storage of cryptographically authenticated data, keyed on the public key. What this means is if you have some public key K then you can look up authenticated blobs of data stored in the DHT by the owner of that key. This opens up a variety of useful abilities to people building decentralized applications.
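
The first mode boils down to content addressing: the lookup key is derived from the data itself. Here is a minimal sketch in Python (real BEP44 hashes the bencoded value with SHA-1; bencoding is elided here for brevity):

```python
import hashlib

# The lookup key for an immutable item is the hash of its value, so
# anyone who fetches the value can re-hash it to check integrity.
value = b"some small immutable blob"
key = hashlib.sha1(value).hexdigest()

def fetch_and_verify(key, fetched_value):
    # A peer returning data for `key` can't forge it: the hash won't match.
    return hashlib.sha1(fetched_value).hexdigest() == key
```

The second, mutable mode is the more interesting one and is unpacked below.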

    For instance, when you distribute a file on BitTorrent it is immutable - you can't change it - but BEP44 provides a way to tell people "hey, there is a new version of this file that I shared before which you can find over here" in a secure and authenticated way. It turns out that this basic mechanism can be used to build a wide variety of decentralized, authenticated functionality and people have built cool experiments like a decentralized microblog using it.

    The purpose of this post is to show you how the datastructure works and how to apply it more widely in your own software.

    If you just want a working implementation you can use, check out the decentral-utils JavaScript library which has a single-function implementation of the datastructure and algorithm which you can use in the browser. Web browsers unfortunately do not have direct access to the BitTorrent DHT, but the library is still useful for authenticating up-to-date data blobs that are passed around between browsers, for example over WebRTC.

    What's good about BEP44

    Most of the advantage of BEP44 comes from the cryptographic signing. What this offers is a way for somebody (let's call her Alice) to verify that a piece of data from somebody else (let's call him Bob) is authentic and that it has not been tampered with. You can do cryptographic signing without BEP44 of course.

    However, the BEP44 specification brings some other features for building decentralized systems:

    • Namespacing against a public key.
    • Replay attack prevention.
    • A compare-and-swap primitive.

    The namespacing feature means that a single public key can have a bunch of different data blobs that others can authenticate. Because the datastructure is keyed on both the public key and a "salt" field, a single public key can store multiple blobs with different "salt" values as a sub-key.

    Replay attack prevention is accomplished with the sequence field, which in BEP44 is a monotonically increasing integer. The way a replay attack works is some adversary keeps an old copy of something you have signed and then when you are offline they send it again and pretend it's a new message. Because the message is signed with your key people are fooled by the adversary into thinking the old data is current. In BEP44 the sequence field prevents this because the adversary will have a data blob with a lower sequence number than the last one you shared and they can't generate a new blob with a higher sequence number because they do not have your private key. So the replayed data blob will be rejected.

    Finally, the compare-and-swap primitive works by specifying a previous sequence number that the current data blob should replace. Peers will reject data blobs which don't replace the current sequence number they hold. In this way it's possible to achieve basic synchronisation and atomicity of data blobs. Using compare-and-swap guarantees that the new value you want to insert into the DHT can be calculated based on up-to-date information.
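
As a sketch, the acceptance rule a peer applies could look like this (plain Python with illustrative field names; real BEP44 peers operate on bencoded DHT messages):

```python
def accept_update(stored, update):
    """Decide whether a peer should replace its stored blob with `update`.

    `stored` is the currently held blob (or None) and `update` the incoming
    one; both are dicts with a `seq` field, and the update may carry an
    optional `cas` field naming the seq it expects to replace.
    """
    if stored is not None:
        # Replay protection: only strictly newer sequence numbers win.
        if update["seq"] <= stored["seq"]:
            return False
        # Compare-and-swap: the writer insists on replacing the seq it read.
        if update.get("cas") is not None and update["cas"] != stored["seq"]:
            return False
    return True
```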

    Example usage

    Imagine you take it upon yourself the small task of building a decentralized social network. Users have a timeline of posts which they have written. When they write a new post it should be appended to their timeline and people should see the updated timeline with the new post.

    In the centralized social network case this is easy. The social network provider TwitFace has an internal copy of the poster's timeline to which they add the new post. Followers are notified of the update and the updated timeline is sent to them by the provider. The way the authentication works in this case is the original poster has logged in with their password and so the provider knows who they are, but the receivers of the update must trust the provider.

    Depiction of centralized post authentication

    How do readers know that the post is authentic and comes from the original poster? The reader must trust the provider completely. They must trust that the provider is showing them the authentic timeline of messages, that the messages have not been modified, that fake messages have not been injected into the timeline, that new messages have been added to the feed in a timely fashion, and that no message has been censored and removed from the feed by the provider.

    The problem with this model is that trusted third parties are security holes. Centralized social networks do modify things people have said. They do inject posts into people's timelines (ads and worse). They do change the order and timing of posts to suit their own goals. Perhaps worst of all, they do censor posts completely.

    I won't get into the politics of censorship. Suffice it to say that censorship is fine and good right up until the point where your values differ from the entity doing the censoring. Values change, mysterious outside influences are myriad, and few people have complete alignment of values even with our noble corporate overlords in Silicon Valley.

    The basic goal here is that if you've chosen to read somebody's timeline you can trust that the feed of posts you are reading from them is authentic and complete and as they intended.

    Happily, we can use the BEP44 technique to route around the trusted-third-party centralized-social-network-provider security hole. BEP44 provides a way to receive an update and verify it cryptographically. It provides a way to know that this is the latest and complete version. It allows the verifier to do this using only the received data structure without requiring any kind of secret server side logic or password based authentication like you would find in a centralized social network.

    Depiction of decentralized post authentication

    How it works

    At the heart of the algorithm is a small datastructure which can be shared, updated by the sharer, and then shared again. Receivers can verify the updates are authentic. The idea is that you can share a small piece of authenticated data which points to a larger piece of immutable data. For instance, you can share an authenticated datastructure with the hash of a torrent containing all of your posts (an RSS feed for instance). Receivers then know that torrent is the current representation of your timeline of posts.

    The datastructure has the following fields:

    • value
    • seq
    • cas [optional]
    • salt [optional]
    • pubkey
    • signature
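
For illustration, an instance of the structure might look like the following Python dict (the values are made up; in real BEP44 the value is bencoded and limited in size, and the key and signature are raw ed25519 byte strings):

```python
# An illustrative instance of the BEP44-style structure.
update = {
    "value": "<hash of the torrent holding my current timeline>",
    "seq": 42,           # monotonically increasing version number
    "cas": 41,           # optional: the seq this update expects to replace
    "salt": "timeline",  # optional: sub-key under this public key
    "pubkey": "<32-byte ed25519 public key>",
    "signature": "<64-byte signature over salt + seq + value>",
}
```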

    When a verifier receives a copy of this data structure they perform the following checks:

    • Is the size of the value field below the maximum size?
    • Does the value conform to the expected format / encoding?
    • Is the seq number higher than the last seq number I saw (if any)?
    • If cas is present does it match the previous seq number I have?
    • Is the attached signature made with the private key corresponding to the attached pubkey over the datastructure's value, seq, and salt fields?

    In this final point they are checking that the structure has been digitally signed with the author's private key. This is accomplished by concatenating salt + seq + value and then checking that the signature is valid for that data and the given public key.

    In this way verifiers can know that an update from a known keypair/identity is authentic and current.
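
BEP44 itself signs an encoded form of the salt, seq, and value fields with an ed25519 private key. As a stdlib-only illustration of the same concatenate-then-verify flow, here is a sketch using HMAC instead - note this is a shared-secret stand-in, not a public-key signature, so unlike the real scheme it doesn't let third parties verify with only a public key:

```python
import hmac
import hashlib

def sign(key: bytes, salt: bytes, seq: int, value: bytes) -> bytes:
    # Concatenate salt + seq + value, then MAC it. Real BEP44 signs the
    # encoded fields with the ed25519 private key instead.
    message = salt + str(seq).encode() + value
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, salt: bytes, seq: int, value: bytes, sig: bytes) -> bool:
    expected = sign(key, salt, seq, value)
    return hmac.compare_digest(expected, sig)
```

Tampering with any signed field - for example bumping seq without re-signing - makes the check fail.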

    Read more posts on the subject of cryptography & decentralized systems.


    I just released Hacksilver, a new album of procedurally generated music.

    It uses a whole slew of weird tech to generate the beats, melodies, and synth sounds, including a beat-generating LISP, 8-bit-synth-generating JavaScript, and Pure Data for the mixing and mastering. One thing that was particularly fun was procedurally generating Impulse Tracker files.

    Would appreciate a re-share if you know of anybody who might be into this type of thing.




    Yesterday I released Hacksilver, an album of procedurally generated "algorave" music. Some people had questions about the technology used to write it so I thought I'd write this up.

    The beats and melodies were generated using drillbit, a LISP codebase written in a Python variant called Hy. The project outputs Impulse Tracker mod files which are then played and mixed live.

    The interesting parts of that codebase are in the generators folder. For example the drill-n-bass choppage generator is here.

    Each generator has three functions:

    • make-sample-set: which generates IT wav tables that are used by the generator (e.g. individual drum kit or synth sounds)
    • make-pattern-settings: which sets up parameters & context that will be re-used by the pattern generator to provide similarity across pattern variations
    • make-pattern: which outputs the pattern data in a format easily consumed by the Impulse Tracker file writer

    Mixing and live-effects are performed in Pure Data. Originally I was using a fully software based mixer. However I discovered that a nicer mode of operation is to have individual bits of sound generating/filter hardware chained together. So I started using this Raspberry Pi based mixer + FX unit from another project to mix live.

    One other bit of software in there is jsfxr, which is wrapped by the LISP code and outputs 8-bit synth sounds (which are then used by the pattern generator). Because the synth definitions are simple JSON hash maps, there is a fun pseudo-evolutionary technique where you interpolate between the values of two synth definitions that you like to generate new sounds.
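
That interpolation trick can be sketched as follows (the parameter names here are invented, not actual jsfxr fields):

```python
def lerp_synths(a, b, t):
    """Linearly interpolate the numeric fields of two synth definitions.

    t=0 gives a, t=1 gives b; values in between are new hybrid sounds.
    Non-numeric fields are copied from `a`.
    """
    out = {}
    for key in a:
        if isinstance(a[key], (int, float)) and isinstance(b.get(key), (int, float)):
            out[key] = a[key] + (b[key] - a[key]) * t
        else:
            out[key] = a[key]
    return out

# Two synth definitions you like, blended halfway into a new sound.
kick = {"wave": 0, "attack": 0.0, "decay": 0.3, "freq": 55.0}
laser = {"wave": 2, "attack": 0.1, "decay": 0.1, "freq": 880.0}
hybrid = lerp_synths(kick, laser, 0.5)
```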

    Hardhat tracker module 

    I also built a little hardware Impulse Tracker renderer based on a Raspberry Pi running XMP with my friend Dimity. It has a Pocket Operator style sync output and runs directly into the mixer, so that both share the same timing and the fx can be quantised to the music being played.

    If you're interested in the music hardware that Dimity and I are building and selling you can stay updated at bzzt.studio.

    In the image at the top of this post the hardware Impulse Tracker renderer is the little box on the right hand side. The RPi mixer/fx unit is to the top right of the C64 keyboard. The Korg Nanokontrol2 strapped to the C64 keyboard is controlling the fx and mixing parameters on the RPi. The keyboard itself was for playing live synth sounds (a very simple arpeggiating subtractive synthesizer built in Pd).

  • 07/31/19--20:19: Joplin With Self-hosted Sync

    For some time I have been looking for a writing solution with the following properties:

    • Lets me review and make minor edits on my phone.
    • Is synched to my laptop where I can write longer form.
    • Supports simple markup such as Markdown.
    • Supports attachments and images.
    • Is Free & Open Source Software and can be self-hosted.

    The solution is Joplin.


    It's a wonderful piece of software. There are apps for all of the usual platforms, including a direct link to the Android apk, which is a blessing if you are somebody who opts out of using Google services.


    Some other things which are great about Joplin:

    • You can edit notes in an external editor.
    • You can paste images directly into your document.
    • It imports and exports many formats including Markdown.
    • It can export individual articles to PDF.
    • Its native export format "JEX" is a simple tar file.
    • Its native data store is on-disk.
    • Sync is optional and very easy to set up.
    • Sync uses the widely supported webdav protocol.

    Joplin Sync

    I got sync between my devices working quickly by using Piku to deploy a simple webdav server to my VPS. If you want to do this yourself, check out the webdav server repository and then push it to Piku as follows:

    git remote add piku piku@MYSERVER.NET:webdav
    git push piku master
    piku config:set NGINX_SERVER_NAME=WEBDAV.SOMEDOMAIN.NET PASSWORDS="username:password username2:password2 ..." FOLDERS="/joplin:/home/piku/joplin"

    After that is up and running you can configure Joplin sync by selecting "webdav" on each device and then enter the URL WEBDAV.SOMEDOMAIN.NET/joplin/ and the username:password pair you specified above.

    I wrote this post with Joplin.

  • 08/04/19--02:06: Goomalling


    Recently I've been hacking on a game engine for infinitelives called px3d.


    It's built on top of ClojureScript, Blender, and Three.js and it runs in the browser.

    One feature I'm particularly happy with is the live-reloading of Blender assets into the game. You hit "save" in Blender and the updates appear in the running game a second later - no need to re-compile or re-load the game.


    The way this works is with a background script which watches the assets.blend file. It re-builds the assets.glb whenever it is modified, and writes the hash of the file into assets.cljs. Figwheel pushes changes to the compiled cljs files whenever they change, and there is another bit of code which tells three.js to re-load assets.glb if the hash has changed.
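
A minimal version of such a watcher might look like this (the file names follow the post; the Blender export invocation and the cljs namespace/var are placeholders for whatever the real build step and game code use):

```python
import hashlib
import os
import subprocess
import time

def file_hash(path):
    # Hash the built asset bundle so the game can detect changes.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def rebuild(glb="assets.glb", cljs="assets.cljs", build_cmd=None):
    """Re-export the .glb and write its hash into a cljs source file.

    `build_cmd` is a placeholder for the real Blender export invocation.
    """
    if build_cmd:
        subprocess.run(build_cmd, check=True)
    h = file_hash(glb)
    with open(cljs, "w") as f:
        # Figwheel sees assets.cljs change and pushes the new hash, which
        # triggers a three.js re-load of assets.glb in the running game.
        f.write('(ns px3d.assets)\n(def asset-hash "%s")\n' % h)

def watch(blend="assets.blend", poll=1.0, **kwargs):
    last = None
    while True:
        mtime = os.path.getmtime(blend)
        if mtime != last:  # the .blend was saved in Blender
            last = mtime
            rebuild(**kwargs)
        time.sleep(poll)
```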


    Infinitelives is the vehicle me and my buddy Crispin use to make games and tooling, mostly for gamejams. The gamejam format is great because it is time-boxed, which means we can periodically do this self-indulgent thing we enjoy without taking too much family or work time.

    Gamejams are typically only 48 hours long and so we have learned some good techniques for shipping working code under extreme constraints. A hardcore economy of time, resources, and scope is required.

    ClojureScript & Figwheel are perfect for this with their hot-loading of modified code. I built the tight Blender re-load integration for the same reason. Hand drawn graphics consume a lot of time during jams and this should help us really level up on the content side of things.


    If you'd like to find out when we release games and new tools you can sign up to our release notifications on the infinitelives home page or follow us on Twitter.


    In November 2015 Nick Szabo gave a talk on the history of the blockchain which was dense with useful ideas.


    Here are some notes I took on his talk:

    • Philosophical inspiration to Cypherpunks who invented Cryptocurrency:

      • Ayn Rand: Galt's Gulch - independence from corrupt institutions.
      • Tim May: "protect yourself with cryptography" (cyber Galt's Gulch.)
      • Friedrich Hayek: Institutions of property, contracts, money are actually important to human freedom.
    • Use computer science to minimize vulnerability to strangers.

    • Non-violently enforce the good services of institutions.

    • "Try to secure as much as possible" not just communication.

    • Cryptography: only secures communications from 3rd parties.

    • David Chaum: let's apply this to money too.

    • Centralization problem remained in digital cash startups.

    • Bad assumptions in computer security: trusted third parties like certificate authorities are secure.

    • Trusted third parties are security holes.

    • Centralization is insecure.

    • E.g. Communists were able to get stranglehold with just control of railroads, newspapers, radio.

    • Gold is insecure

      • Spanish looted Aztec gold, pirates looted Spanish gold.
      • Part of end of gold standard was German U-boat threat to British gold transportation.
      • Franklin Roosevelt's government confiscated gold.
      • In modern times xray machines detect gold easily.
    • Decentralization per computer science is much more automated & secure than traditional security.

    • CS decentralization can only replace small fraction of traditional security but with very high cost savings.

    • Traditional security isn't the protocol itself, requires strong external law enforcement.

    • Computer security can be secure across national borders instead of siloed inside jurisdictions.

    • Cryptocurrency helps solve this through decentralization.

    • Separation of duties: several independent people to perform a task to get it done.

    • Each node as independent as possible.

    • E.g. crude measure of independence: geographic diversity of nodes.

    • Number of nodes is only a proxy measure of decentralization.

    • Smart contract:

      • Long lived process or "distributed app".
      • Acts like a contract.
      • Performance, verification etc.
      • Generally 2 parties + blockchain (replacing the trusted third party).
    • Wet code = traditional law. Dry code = smart contract.

    • Law is subjective, enforced with coercion, flexible, highly evolved.

    • Smart contracts are mathematically rigorous, cryptographically enforced, rigid, very new.

    • Law is jurisdictionally siloed, and expensive to execute.

    • Smart contracts are super-national & independent and low cost.

    • Seals in clay/wax were important when writing was invented: signature + tamper evident.

    • Modern seals at e.g. crime scenes: sealing door, evidence bag with numeric identifier.

    • Blockchain can keep secure log with both semantics (serial number) and proof of evidence (photo hash).

    • Put proof of evidence on blockchain as well as semantic reference for contract code to interface with.

    • Can secure physical spaces with same mechanism.

    • Proplets: blockchain can tell them which keys have which capabilities.

      • For almost any valuable property that can be controlled digitally
      • Example: Auto-repo collateral upon contract breach.
      • Example: creditors without access to offshore oil rig used as collateral.
    • Recent project:

      • Trust minimized token: secure property titles, colored coins. Securing transfer of ownership.
      • Trust minimized cash flows (dividends, coupons, etc).
    • Idea: social networks for blockchains. Execute payment swaps & smart contracts after linking social accounts together.

    • Let's try to think about security more broadly instead of only encryption.

    • Let's try to protect everything that's important to us, without centralization.

  • 09/07/19--23:54: Droneship Study

    Droneship. Study in the style of thisnorthernboy.


    I've got three conference talks coming up in Perth (Australia), London, and The Gold Coast (Australia). If you're nearby let me know - I would love to buy you a coffee/beer and hear what you're up to.

    Security BSides

    This Sunday, September 22nd, Perth, Western Australia.

    I'm presenting "Bugout: practical decentralization on the modern web." It's a talk about the library I built on top of WebTorrent for building web based decentralized systems.

    Bsides Perth Logo

    Clojure eXchange

    December 2nd-3rd, London, UK.

    I'm giving a keynote: a show and tell of the multitude of strange things I've been building with the Clojure[Script] family of programming languages, and how Clojure enables the bad habit of starting way too many projects. I'll also give an update on Thumbelina, the tiny MIDI controller I've been working on with my friend Dimity.

    Skills Matter Clojure eXchange


    linux.conf.au

    January 13th-17th, Gold Coast, Australia.

    I'm talking about Piku, and how it helps you do git push deployments to your own servers. I've made a bunch of contributions to this open source project in recent months. I've personally found a huge productivity gain from being able to deploy internet services without having to think too much, and I'm excited to show others this too.

    Linux Australia Logo


    Hy(lang) is a LISP-family programming language built on top of Python. You get the rich Python language & library ecosystem, with a LISP syntax and many of the language conveniences of Clojure, such as reader macros for easy access to built in data types. What's not to love?


    I recently found out I have the most GitHub stars for projects written in Hy of any developer worldwide. With this admittedly ridiculous credential in hand I'd like to offer some opinions on the language.

    I really like Hy a lot. I prefer writing code in Hy to writing code in Python, and I am writing this post because I want to see Hy do well. I originally wrote this as a list of things in Hy that could possibly be improved, from the perspective of an end user such as myself. Then my friend Crispin sent me this fantastic video by Adam Harvey: "What PHP learned from Python" and I realised that all of the issues I have with Hy stem from the same basic problem that Adam talks about.

    Here is the crux of the problem: I've written a bunch of software in Hy over the past few years and often when I moved to a newer point-release of the language all my old code breaks. My hard drive is littered with projects which only run under a specific sub-version of Hy.

    gilch[m]: It might depend on the Hy version you're using.

    I understand that the maintainers of the language, who are hard-working people doing a public service for which I'm deeply grateful, are concerned with making the language pure and clean and good. I understand that languages have to change to get better and they need to "move fast and break things" sometimes. That's all fine and good.

    However, I think Adam Harvey's point stands. If you want users to use and love your language:

    • Break things cautiously
    • Maintain terrible things if it makes life better
    • Expand the zone of overlap [backwards compatibility between consecutive versions]

    I think if you can maintain backwards compatibility you should.

    Almost all of the breakages I've experienced between Hy versions could have been avoided with aliasing and documentation. Not doing this backwards compatibility work basically tells your users "don't build things with this language, we don't care about you." I know that is almost certainly not the attitude of the maintainers (who are lovely, helpful people in my experience) but it is the way it comes across as an end user.

    Here is a list of things which have changed between versions which blew up my Hy codebases each time I upgraded Hy:

    • Renaming true, false, and nil to the Python natives True, False, and None. These could have been aliased to maintain backwards compatibility.

    • Removing list-comp and dict-comp in favour of the excellent lfor, dfor, and friends. Again, small macros could have aliased the old versions to the new versions.

    • Removing def in favour of setv. A very small macro or maybe even an alias could have retained def as it is pretty much functionally identical from the perspective of an end user.

    • Removing apply, presumably in favour of #**. Support could have been retained for the functional apply. There are situations where a proper apply is favourable.

    • Removing the core macro let. It seems there was an issue where let would not behave correctly with generators and Python's yield statement. A more user-friendly solution than chopping the imperfect let from the default namespace and breaking everybody's code would have been to document the issue clearly for users and leave the imperfect let in. I know it is available in contrib. Moving it to contrib broke codebases.

    Having your old code break each time you use a new version is frustrating. It makes it hard to justify using the language for new projects because the maintenance burden will be hy-er (sorry, heh).

    Some other minor nits I should mention which I think would vastly improve the language:

    • loop/recur should be core. Like let, these are available in contrib, but that means you have to explicitly import them. Additionally it would be super nice if they were updated to support the vector-of-pairs declaration style of let and cond etc.

    • Why does assoc return None? This is completely unexpected. If there are performance issues then create an alternative which does what Python dict assignment does (aset?) but it seems unwise to break user expectations so fundamentally.


    It is much easier to provide criticism than it is to write working code. I am sorry this is a blog post instead of a pull request, and I hope this criticism is seen as constructive. I want to thank everybody who has worked on hy. I am a huge fan of the language. It is an amazing piece of software and I am very grateful for your work.


    An economist told me the worst part of her job is turning Excel data into HTML tables, so I built an add-in to fix it.

    Many software developers probably don't realise that Microsoft Office add-ins these days are simply web pages which run inside of a panel in the UI. I suppose it was to be expected given the trend of the last couple of decades towards web based everything, but it still came as a surprise to me when I had to build one for a job. In my mind Office was still back in the world of COM objects and Delphi and DLLs.

    After discovering how easy it is, I decided to do it again for this side project. Here is how I did it.

    The plugin (whoops, "add-in") I built is called "Excel to HTML table" and does what it says on the tin. You make a selection of cells, click "copy", and you get a clipboard full of the corresponding HTML code that will render those cells. After that you can paste the code into your text editor and use it in a site's HTML.

    Excel to HTML table screencast

    Microsoft have some nodejs and yeoman projects to help you get up and running but I'm the sort of developer who likes to roll their own and keep things tight, tidy, and tiny. I like to build things from first principles so that I can understand what is going on at as low a level as possible. Here's what I discovered.

    The Microsoft Tutorial has a lot of good info in it, but it's geared towards people using either Node + Yeoman or Visual Studio. If you're a text-editor-and-command-line person like me the best way to get started is just to grab the example .html, .css, and .js snippets from the Visual Studio version.

    Get a dev server running

    Any local HTTPS server will do. The difficult part is it must be HTTPS, even on localhost, or Word/Excel will refuse to load your add-in. My add-in is almost entirely client-side code and so I could get away with a small Python server using the built-in libraries like this:

    import ssl
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    address = ("localhost", 8000)
    httpd = HTTPServer(address, SimpleHTTPRequestHandler)
    httpd.socket = ssl.wrap_socket(httpd.socket,
                                   keyfile='selfsigned.key',
                                   certfile='selfsigned.cer',
                                   server_side=True)
    httpd.serve_forever()

    As you can see this looks for the .key and .cer files in the local dir, and you can create those with openssl:

    openssl req -x509 -newkey rsa:4096 -keyout selfsigned.key -out selfsigned.cer -days 365 -nodes -subj "/C=US/ST=NY/O=localhost/OU=Localhost"

    You can do the same thing in nodejs with express if that's your bag by passing the right options to https.createServer:

    const server = https.createServer({
      key: fs.readFileSync('./selfsigned.key'),
      cert: fs.readFileSync('./selfsigned.cer')
    }, app).listen(8000)

    Before you can load your add-in you should make sure your browser has accepted the self-signed cert or the add-in won't load. Do this by browsing to your localhost:8000 server and bypassing the certificate warnings.

    If you're debugging in the native Office app instead of Office Online, you will need to do this in Internet Explorer for it to work as far as I can tell.

    Manifest validation


    Office finds out about the URL for your add-in using a "manifest" file. This file is XML and pretty fragile. You need to make sure it complies with the spec. Luckily Microsoft have a tool for validating the manifest on the command line. Install the npm package office-toolbox (version 0.2.1 at the time of writing) and then you can run a command like this:

    ./node_modules/.bin/office-toolbox validate -m ./path/to/manifest.xml

    This will report most issues that come up.

    In the manifest you can use https://localhost:8000/Home.html and it will point to your local dev server when running.

    The Code

    The code itself is basically web code. You can use libraries like React and jQuery in your UI. The exception of course is when calling the native APIs. These are exposed through an interface like Excel.run(...) and make heavy use of promises for async. You will often find yourself doing context.load() and then waiting for the promise to resolve before doing the next thing in the document. The API documentation is super useful for figuring out what is possible and how to do it.


    When it comes to iterating on your code and debugging, by far the easiest way is to use Office Online. This is because it is already in the web browser so debugging works the same way as you are used to - the add-in is just an iframe.

    I was even able to do my dev & debugging right at home in Firefox on GNU/Linux!

    At some point you will want to debug using an actual copy of Office running on Windows. I used Office 2016 on my wife's computer.

    If you are not a Windows developer the following tip will save you a lot of time when it comes to debugging native. It's called "F12 Developer Tools" and it's buried deep inside a Windows subdirectory in C:\Windows\System32\F12\.

    What this tool does is attach a "web console" type of debugger to your Office add-in instance running inside Office. You can do stuff like console.log and also see JavaScript errors which are thrown.

    Ready to roll


    Once you've got those pieces in place you should have a basic add-in up and running. After that it's all about referencing the docs to figure out how to do the thing you want your add-in to do.


    Finally when you're ready to ship, you submit your manifest.xml to the App Source "seller dashboard". Expect to wait a few days for the validation team to get back to you, and to have to fix things. This process was useful as they see the software with a fresh pair of eyes and give actionable feedback.

    LISP madness

    So finally I should mention my LISP obsession. I actually wrote all the code for this plugin in a LISP called Wisp. It's a Clojure-like language that compiles to JavaScript and seems quite similar to ClojureScript but with three core differences:

    1. It lacks almost all of the great features & tooling of ClojureScript.
    2. It is closer to being "JavaScript with LISP syntax".
    3. It compiles down very small if you know what you are doing. How small? My final compiled .js bundle for this add-in is just 8.2k non-gzipped.

    So that's about it. I hope you got something out of this article on building Microsoft Office plugins using web tech.

    Now back to open source dev for me. :)
