ha4<br />
<br />
<b>People 101</b> (2014-11-02)<br />
After a long stretch of working from a cave, I am discovering the joys and horrors of working with a team of multiple people and one robot. How does one function? Where is the manual to <i>people</i>?<br />
<br />
I am still looking for a good one. In the meanwhile, some field notes:<br />
<ol>
<li>People reach different conclusions from the same data due to differences in judgement - disagreement does not always result from lack of understanding or poor communication.</li>
<li>People argue, mostly about unimportant things, because, due to (1), what <i>they</i> judge important <i>you</i> consider trivial.</li>
<li>Being an expert in a field does not entitle you to an expert opinion on everything - something I have to remind myself of often.</li>
<li>People make mistakes, both of professional and interpersonal nature.</li>
<li>People do a lot more of all the above when lacking proper sleep and rest.</li>
<li>I am no exception.</li>
</ol>
<div>
I am making some ghastly mistakes, some requiring a sincere apology. I wish there were an easy way to cover these with regression tests.</div>
<hr /><b>Tachyus</b> (2014-08-02)<br />
I am joining the <a href="http://tachyus.com">Tachyus</a> F# team starting next week, and moving to San Mateo, CA. I am both excited and a little sad. During my time at <a href="http://intellifactory.com">IntelliFactory</a> I learned pretty much all the skills I now have as a programmer. I met, worked with, and learned from amazing hackers. It is time for me to move on, but IntelliFactory will always be a very special memory for me, and Budapest a special place. Thanks to Adam Granicz, Diego Echeverri, Joel Bjornson, Loic Denuziere, Andras Janko, and many others for an amazing time.<br />
<br />
<br />
<br />
<hr /><b>UI.Next available</b> (2014-07-25)<br />
<a href="https://github.com/intellifactory/websharper.ui.next">UI.Next</a> is available for experimentation as a public <a href="http://www.nuget.org/packages/WebSharper.UI.Next/">WebSharper.UI.Next</a> NuGet package. You can build the <a href="http://intellifactory.github.io/websharper.ui.next/#SimpleTextBox">examples</a> from <a href="https://github.com/intellifactory/websharper.ui.next">source</a> - just get <a href="http://websharper.com/">WebSharper</a> first.<br />
<br />
UI.Next addresses most shortcomings we felt WebSharper had for single-page JavaScript applications. The most interesting part is a dataflow model integrated with DOM for defining reactive UI, but we also provide support for client-side routing and animation.<br />
<br />
If you play with it, <a href="https://twitter.com/Simon_JF">Simon</a> and I will be very interested in your feedback. Next week we plan to do a few more samples, covering more animation and interaction with the mouse and keyboard. We find that doing these examples is a very helpful way of arriving at a better API.
<hr /><b>The fatal attraction of FRP</b> (2014-07-22)<br />
For about a year or so, I made Functional Reactive Programming (FRP) a taboo subject. I cringed at every mention of it. I think this was a mental self-defense reaction: my extended immune system was signaling that I had spent too much time thinking about the subject without any tangible results.<br />
<br />
The fatal attraction of FRP is its simplicity. The semantics are beautiful and clear. There are Behaviors (functions of time) and Events (timed occurrences), and they all dance together. You write causal transformations (future values depend on past values, not vice versa), and it works. You use equational reasoning to transform your program. You get inspired by Conal Elliott's papers.<br />
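The beautiful semantics fit in a few lines of F#. This is a denotational sketch only (names are mine), not an implementation - nothing here is incremental, leak-free, or efficient, which is precisely the implementation problem:

```fsharp
// Semantic model: Behaviors are functions of time, Events are
// time-stamped occurrences ordered by time.
type Time = float
type Behavior<'T> = Time -> 'T
type Event<'T> = (Time * 'T) list

// Sampling a behavior at a point in time:
let sample (b: Behavior<'T>) (t: Time) : 'T = b t

// A causal transformation on events: each output occurrence depends
// only on the input occurrence at the same time, never on future ones.
let mapE (f: 'a -> 'b) (e: Event<'a>) : Event<'b> =
    e |> List.map (fun (t, x) -> (t, f x))
```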
<br />
However, how do we implement it? Many a ship has foundered in these waters.<br />
<br />
Perhaps you do not even have to. People do different things to really "get" something - to obtain an operational understanding of it. Mathematicians like denotational semantics: they understand object A by explaining it in terms of an object B they already know. Programmers like to implement things, or else play with an existing implementation. Once I play with Impl(A), I "understand" A.<br />
<br />
At the Budapest functional programming beer night I used to go to, we had the fortune of having Gergely Patai and Csaba Hurska. These guys have built some simply amazing Haskell projects: the Elerea FRP library and the LambdaCube purely functional 3D engine. Gergely is a mathematician. He explained FRP to me by reducing things to Cartesian closed categories. As you can guess, it did not help me - to this day my understanding of those things is shaky. Csaba is a little more of a programmer; he told me Haskell is a hacker's language. Both, now that I think of it, made fun of me for talking about dependent types being the future of programming - with good reason, as I now know. But I digress.<br />
<br />
At a very high level, the problems with implementing FRP in a language like Haskell or F# are these:<br />
<br />
<ol><li>It is quite hard to enforce causality of transformations with a vanilla type system. AFAIK most practical systems either design a new language and type system, or just leave this invariant unchecked. </li>
<li>If event streams are first class values, this can easily create memory leaks in a higher-order program, if the entire unfolding history is retained. Solutions here are: do not give the user first-class event streams (but only their transformers, or something like that), create a new language and type system that rules out those nasty higher-order programs (Elm.js), or use some kind of combination of convention, types, and clever implementation (weak pointers etc), to make it work.</li>
</ol><div>Much as I despair of getting FRP right, I am coming back to the topic as we are working on WebSharper <a href="https://github.com/intellifactory/websharper.ui.next">UI.Next</a> with Simon Fowler. While trying to implement a sub-FRP system - something not quite as general, but hopefully much simpler to implement - I realize I am slowly starting to think about general FRP again.</div><div><br />
</div><div>Briefly, the compromises we are making in UI.Next are:</div><div><ol><li>No first-class event streams. You want to transform event occurrences? Do it with imperative state and callbacks. That stops working? Do it with agents and communicating micro-processes. That stops working too? Use Concurrent ML (or Hopac). We are not quite there yet with providing CML primitives in WebSharper/JS but this is planned, probably as a proxy for Hopac.</li>
<li>We do, however, provide a dataflow abstraction that is almost, but not quite, approaching FRP behaviors. A View<'T> is something that varies over time in discrete steps. It is computed from zero or more Var<'T>, which are a reactive form of ref cells.</li>
<li>Using the mental model of communicating processes, we designed a protocol for View<'T> processes to synchronize in a way that is friendly to GC for most programs, even without relying on weak pointers (not at this point, at least). The protocol also does not preserve occurrences, it synchronizes to the "latest" value, which is nice.</li>
<li>Views can be observed imperatively or by constructing reactive DOM documents.</li>
</ol></div><div>It is tempting to add Behaviors too, perhaps as View<Time -> 'T>.</div><div><br />
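To make the compromises concrete, here is a deliberately naive, single-threaded model of the Var/View pair - my own sketch, not the actual UI.Next implementation, which uses the process-based protocol described above rather than pull-based recomputation:

```fsharp
// Toy model: a Var is a mutable cell; a View recomputes from the
// current contents on every observation, so it always sees the
// "latest" value and never retains history.
type Var<'T> = { mutable Value: 'T }
type View<'T> = unit -> 'T

module Var =
    let create v = { Value = v }
    let set (var: Var<'T>) v = var.Value <- v
    let view (var: Var<'T>) : View<'T> = fun () -> var.Value

module View =
    let map (f: 'a -> 'b) (v: View<'a>) : View<'b> = fun () -> f (v ())
    let map2 (f: 'a -> 'b -> 'c) (a: View<'a>) (b: View<'b>) : View<'c> =
        fun () -> f (a ()) (b ())
```

The "latest value" behavior falls out for free here: an observer that samples late simply never sees intermediate values, which matches the protocol's refusal to preserve occurrences.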
</div><div>As I am playing with this, I have one rebellious thought: are FRP event streams, or their transformers, even worth the trouble? Behaviors are clearly wonderful. But do you really want a first-class "mouseClicks"? Or a fold over it, in the context of an ML type system such as F#?</div><div><br />
</div><div>It remains to be seen. So far I like that our system works well for embedding typical higher-order, stateful widget code, as found left and right in the JavaScript ecosystem. It is also simple enough that we could implement it. The reasoning laws are NOT the ones for pure FP, but things are mostly tractable if you stick to the paradigm of communicating processes. CML might just fill the spot, but some cross-pollination might also happen: we might end up adding process combinators that look a lot like FRP event and event/behavior combinators.</div><div><br />
</div>
<hr /><b>Dangers of React and VirtualDom</b> (2014-05-14)<br />
This is a brain dump from a Twitter/IRC discussion with @panesofglass and @bryanedds.<br />
<br />
So there is this brand-new thing called Facebook React and a hype wave around Virtual DOM. I have only skimmed the library, but here are some reasons why I remain sceptical:<br />
<br />
<ul><li>Identity! Typical DOM trees are a lot more than the markup. They are full of pieces of state that are *opaque* - abstract types - and *mutable*. This is especially true if you use third-party libraries, but even with vanilla DOM, an <c>input</c> tag, once rendered, interacts with the user and updates its focus state, the position of the cursor, and so on. Can all of this be captured in a snapshot as a pure value? Does React do this? I will be interested to find out.</li>
<li>If the answer is yes, I would be interested in playing more with VDom. Taking this as a foundation for design means you throw out all other UI libraries (Ext JS, Kendo, etc.). It is like a campaign to rewrite the Internet without mutable state. But who cares - it still might be interesting.</li>
<li>The WebSharper Formlets library we designed with Joel Bjornson a while back assumes the answer is NO, so it does work with third-party libraries. The cost is that some care must be taken to manage the identity of things. As for propagating changes through a tree of *pure* values - hey, we have done that in Formlets too; there is nothing too magical there. There are some nasty cases with higher-order combinators, such as "Formlet.Many", where we try to do the sensible thing, but I am not entirely confident in the theory. Loic Denuziere can comment on Piglets better than I can, but I think it is quite similar as far as the VDom aspect is concerned.</li>
<li>Formlets do not have or need a Diff function. You only need a Diff function if you describe transformations from a Model type to your UI type (VDom or what have you) as functions. If you have combinators with more structure, this can be done implicitly. Check the source.</li>
<li>Before, managing identity made me feel really bad. Am I a functional programmer or what? These days, I am slowly accepting the concurrent paradigm. This is a different worldview, where process calculi (Pi, CSP) play a foundational role similar to that of the lambda calculus in FP. The paradigm makes working with identity essential, natural, and effective. An excellent fit for the world of UI!</li>
<li>There is also the Dataflow/FRP paradigm. I tried a few times, but my brain is still too small to digest it. Things get quite crazy when you allow dynamic switching combinators <c>Signal (Signal t) -> Signal t</c>. So in this space I most sympathize with the Elm language, which, from my limited understanding, seems to take a stand and prohibit such combinators - in fact, its type system rules out even constructing the type <c>Signal (Signal t)</c>. It looks like a great design: a cleaner implementation, and many useful things you can build. Some Haskellers, though, claim that the more general design is a "solved problem". Perhaps.</li>
<li><a href="http://www.umut-acar.org/self-adjusting-computation">Self-Adjusting Computation</a> was recently pointed out to me. I am still working through the paper. Looks highly relevant.</li>
<li>What would I like for Formlets/Piglets vNext, given time? Mostly, a solid *model* - one that even I can understand and explain, that gives a meaning to what is going on, and that maps reasonably cleanly to the implementation. For that, it looks like I still need to catch up on some studying.</li>
</ul><br />
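The dynamic switching combinator mentioned above is, in the toy model where a signal is just a function of time, the monadic join - trivial to write denotationally, which is exactly what makes Elm's decision to exclude it a deliberate design stand (sketch with my own names):

```fsharp
// Toy denotational signals: a Signal is a function of time.
type Signal<'T> = float -> 'T

// map is harmless, first-order plumbing:
let map (f: 'a -> 'b) (s: Signal<'a>) : Signal<'b> = fun t -> f (s t)

// join is the troublesome one: the outer signal chooses, at every
// instant, which inner signal to sample. Implementing this without
// retaining or replaying inner histories is the hard part.
let join (ss: Signal<Signal<'a>>) : Signal<'a> = fun t -> ss t t
```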
Chat logs:<br />
<br />
<a href="https://gist.github.com/t0yv0/37b19dbe2304e30c3adf">#websharper on Freenode</a><br />
<hr /><b>Does syntax matter? Or how I started programming.</b> (2014-04-18)<br />
How I started programming: well, there was some Basic, Pascal, and PHP, but I only started getting serious after encountering and trying to understand Haskell, while in grad school for an unrelated subject. Haskell challenged me, somehow. It looked like nothing I had seen before. And playing with it had that feeling of figuring out a puzzle.<br />
<br />
How I found Haskell: going through the list of languages that Kate or some such editor supported syntax highlighting for. Yeah, I know... In alphabetical order.<br />
<br />
F# was not on the radar then, but OCaml was. I looked at it (along with Python, Ruby, you name it). I remember thinking a couple of things:<br />
<br />
1. The syntax looks so painful<br />
2. How do I use these command-line tools?? (it is better these days with ocamlbuild)<br />
3. It has objects - must be another PHP-like disaster of a language<br />
4. Given 1-3, no point in learning it<br />
<br />
Huge mistake. OCaml is a gem. But you can see how a beginner might miss the point. <br />
<br />
So, for all the good languages out there: I guess syntax does not matter that much, but great documentation does. If the syntax is strange but the documentation explains why the language is worth learning nonetheless, a beginner might stick with it.<br />
<br />
<hr /><b>Concurrent ML and HOPAC</b> (2014-03-14)<br />
One of these days I got hold of a copy of <i>Concurrent Programming in ML</i> by John Reppy, an excellent book. Among its many virtues: it is the missing documentation for the Concurrent ML (CML) system, explaining the design motivations and giving helpful examples.<br />
<br />
I wish it were in the public domain. Why? Simply to push the CML design further and bring more attention to it. It is so much better than what is commonly used. This is not to say that there is no room for other concurrency abstractions. But CML has the advantage of offering a small set of features from which all other useful abstractions are easily built. I cannot do a better job than the book at advocating CML, but I particularly like:<br />
<br />
<ul> <li>Simple to understand semantics - programs in PI or other process calculi map very closely to CML code</li>
<li>Makes it easy to build other abstractions - asynchronous communications, locks, buffers, reactive systems, you name it</li>
<li>Selective communication seems VERY important</li>
<li>Plays well with ML-style languages (my favorite)</li>
<li>Admits moderately efficient implementation - should frequently be good-enough for production, and definitely excellent for prototyping</li>
</ul><br />
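To give a feel for the "simple to understand semantics" point, here is a minimal rendezvous channel built on plain .NET primitives. This is a sketch of the synchronous-send semantics only, not of how CML or Hopac actually implement channels, and it supports no selective communication (which needs a much cleverer protocol):

```fsharp
// A synchronous channel: Send blocks until a Receive takes the value,
// so a completed Send is proof the message was consumed - the CML
// default that the book argues for.
open System.Threading

type Chan<'T>() =
    let mutable slot = Unchecked.defaultof<'T>
    let empty = new SemaphoreSlim(1)  // a sender may deposit
    let full  = new SemaphoreSlim(0)  // a receiver may take
    let taken = new SemaphoreSlim(0)  // the deposited value was taken
    member _.Send(v: 'T) =
        empty.Wait()
        slot <- v
        full.Release() |> ignore
        taken.Wait()                  // rendezvous: block until received
    member _.Receive() : 'T =
        full.Wait()
        let v = slot
        taken.Release() |> ignore
        empty.Release() |> ignore
        v
```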
In a tweet I mentioned that having the book in the public domain would "help fight actor/reactive nonsense" and was asked to elaborate. So here are some statements I have come to disagree with:<br />
<br />
<ul> <li>Concurrent software should be written exclusively with Erlang-style asynchronous message-passing</li>
<li>It should be written with reactive systems of RX/IObservable flavor</li>
<li>It should be written in the FRP paradigm</li>
<li>F# has excellent support for concurrency; it provides <code>async</code> and "actors"...</li>
</ul><br />
Obviously, Erlang-style message passing, reactivity, and carefully designed FRP systems (such as the Elm language, or AFRP) are all good for <i>certain problems</i>. RX/IObservable systems, IMHO, are a net liability. In general, take any of these paradigms as the default, and you will quickly find that the programs you want to write fit the paradigm no better than a saddle fits a cow. I think CML stands out as a much better foundational choice, subsuming the others.<br />
<br />
Note to F# users - thanks to valiant effort by Vesa Karvonen, you can now use CML-style primitives in F#. See the <a href="https://github.com/VesaKarvonen/Hopac">Hopac</a> project - it needs a bit more love!<br />
<br />
Note to lovers of F# async: async is a hack, but a good one. It is what you do when you want cheap threads by the million but do not have time to rewrite the runtime (CLR). However, with a proper runtime there is no need - see Racket or CML, where blocking syntax is used to orchestrate millions of lightweight threads.<br />
<br />
Note on deadlock: CML-style channels use synchronous writes, but an asynchronous write is easy to build on top. The book argues why synchronous writes are a better default. Yes, you can program a deadlock, but you can do the same in any sufficiently expressive concurrency paradigm. You definitely can in Erlang - try it as homework.
<hr /><b>Last word in .NET build systems</b> (2013-10-15)<br />
It has not been said yet, I do not think.<br />
<br />
In the F# world you may be looking at:<br />
<br />
<ul> <li>MSBuild</li>
<li><a href="https://github.com/fsharp/FAKE">FAKE</a></li>
<li><a href="http://bitbucket.org/IntelliFactory/build">IntelliFactory.Build</a></li>
</ul><br />
I am not going to do a detailed pro/con analysis of these just yet, but note that every one of them is currently missing abstractions relevant to this problem domain - building. A build system should allow you to do at least what the venerable <code>Makefile</code> does - optimal rebuilds - but at an abstract level, as a library.<br />
<br />
The best system I have seen so far that gives you these abstractions is <a href="http://community.haskell.org/~ndm/shake/">Shake</a> by Neil Mitchell (coded in Haskell). It goes beyond Makefiles by allowing dynamic dependencies. I did not study the specifics very closely, but the overall design is vastly useful, brilliant.<br />
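The minimum such a library owes you can be sketched in a few lines (helper names are mine): a rule rebuilds its target only when the target is missing or older than some input - make-style timestamp checking, which Shake then generalizes with dynamic dependencies discovered during the build:

```fsharp
// Make-style rebuild check over file timestamps.
open System.IO

let needsRebuild (target: string) (inputs: string list) =
    if not (File.Exists target) then true
    else
        let t = File.GetLastWriteTimeUtc target
        inputs |> List.exists (fun i -> File.GetLastWriteTimeUtc i > t)

// A rule runs its build action only when needed - the "optimal
// rebuild" primitive, as a library function rather than XML.
let rule (target: string) (inputs: string list) (build: unit -> unit) =
    if needsRebuild target inputs then build ()
```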
<br />
I do not have time at the moment to get it into the shape it deserves, but here is some work I have been drafting now and then to build a similar library in F#: <a href="http://github.com/intellifactory/fshake">fshake</a>. If this scratches an itch, let me know - I would be interested in contributors. I have not put the license in yet, but this project will be under the Apache license.<br />
<hr /><b>WebSharper vs FunScript</b> (2013-10-04)<br />
Got asked to compare WebSharper to FunScript on <a href="http://stackoverflow.com/questions/19178978/what-weaknesses-of-funscript-should-i-be-aware-of">StackOverflow</a>; doing it here instead. Disclaimer: I work for IntelliFactory and in fact develop WebSharper, so I am obviously biased.<br />
<br />
Good things to say about FunScript:<br />
<br />
<ul><li>seems to be entirely unrestricted (I could not even locate a license file)</li>
<li>has some very talented people hacking on it (but not as their day job)</li>
<li>has this wonderful idea of using TypeScript definition files to provide typed API for JavaScript libraries (however, at this moment it does not work with latest TypeScript version so no luck there)</li>
</ul><br />
Why would you be interested in WebSharper instead? The quick answer is that it actually gets used a lot more and is known to work quite well on large projects (I will cite FPish and CloudSharper as ones we have been using it on at IntelliFactory); there is a team working on it day to day, and you can get actual support. It has been in use longer, so more issues have been ironed out, and it definitely supports a larger portion of the F# standard library out of the box.<br />
<br />
NOTE: Tomas pointed me to a FunScript way of doing the following in the comments. <i>Jon Harrop cites problems with using <code>sqrt</code> in FunScript. In WebSharper, it works. Suppose it did not, and you know how the function looks in JavaScript; then you do: </i><br />
<br />
<pre>[<Inline "Math.sqrt($x)">]
let sqrt (x: double) = sqrt x
</pre><br />
And there you have it. Also, rest assured that there is no string splicing going on - the inline is parsed as JavaScript, then a large subset of JS like the above is lifted to our Core form, which ensures things like evaluation order are preserved, and lets optimizations work smoothly, including ones we have not written yet :)<br />
<br />
That being said, I believe we have made quite a few mistakes in the past, including:<br />
<br />
<ul><li>hiding sources (it <b>is</b> open-source now)</li>
<li>mismanaging the community - I am actually reading books now on how to better facilitate an open-source project community</li>
<li>making it too hard to get started and not giving enough documentation<br />
(presently we are writing manual chapters at <a href="http://github.com/intellifactory/websharper">our github repo</a>) and preparing a website update</li>
<li>trying to provide all library bindings ourselves - TypeScript is definitely a great idea and we will add support for it (we have a working prototype for 0.8 but need to upgrade to the latest TS - same as FunScript)</li>
<li>attempting to solve too many problems at once, and not focusing on the important problems first - leaving us with a large and somewhat difficult-to-change codebase</li>
<li>some engineering mistakes in organizing code or designing APIs</li>
</ul><br />
If you feel like making more suggestions, please do..<br />
<br />
People also typically bring up licensing and how the output code looks as problems. I do not think those are show-stopper issues:<br />
<br />
<ul><li>License (AGPL) - it is actually free for open-source use. If you need to close your app, but cannot afford the listed license fees, just talk to us and we can work out a deal. Note that having this license might be annoying but it is essential for securing funding and commercial support for the project - for it to have a future.</li>
<li>Code output - we are working on a better optimizer with Andras Janko that will help a lot, both in shrinking the output and in improving performance. But really, how the code looks is not an issue. In months of programming with WebSharper I do not remember ever looking at the JavaScript output. Look at Emscripten - can you read its output? Yet Emscripten is extremely useful. Once you get past a certain stability level, this does not matter anymore.</li>
</ul><br />
I also privately think both WebSharper and FunScript make a fundamental mistake in using F# quotations - I think the future is simply not there. There is exciting work going on in projects such as asm.js and Emscripten, and I strongly suspect the future successful project in this area will provide a much more compatible CLR implementation on top of JS, plus perhaps some specialized optimizations for the functional code F# produces. I have seen a few CLR-to-JS compilers, but am not sure which is the best today. I tried writing one myself and hope to come back to it; it certainly was challenging and interesting. The key is to stop thinking of such projects as source-to-source compilers and start treating JS for what it is: portable assembly.<br />
<br />
<br />
<hr /><b>Generic programming in F# - another take</b> (2013-06-06)<br />
Playing a bit more with generics, I stumbled upon some fairly compact combinators that can derive n-ary conjunctions from a binary conjunction. Here is a self-explanatory F# example (disregard the ugliness caused by F#'s lack of higher kinds; Haskell or OCaml code would not need it):<br />
<br />
<script src="https://gist.github.com/toyvo/5725225.js"></script><br />
<br />
In case you are wondering about the Coq file in the gist: a good objection to this approach is efficiency. Instances computed this way are not efficient at all - just think of all the allocated tuples! It turns out we can remedy the situation with automated program transformation. In F# this would involve computing over quotations or similar quoted forms, and it is hard to get started since little support for optimizing those is available out of the box. Coq has more batteries included: as you can see in the gist, it tackles the above example trivially with the "compute" tactic, deriving an instance equivalent to a hand-written one.<br />
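In case the gist does not render, here is the shape of the trick, in the spirit of the combinators above but simplified to equality and with my own names. The binary case composes with itself, so the ternary instance needs no new logic, only re-association into nested pairs - and the intermediate tuples it allocates are exactly the inefficiency that the Coq "compute" trick eliminates:

```fsharp
// Binary conjunction-style combinator: equality over pairs.
let eqPair (eqA: 'a -> 'a -> bool) (eqB: 'b -> 'b -> bool) =
    fun (a1, b1) (a2, b2) -> eqA a1 a2 && eqB b1 b2

// Ternary derived from binary by nesting: (a, (b, c)).
let eqTriple eqA eqB eqC =
    fun (a1, b1, c1) (a2, b2, c2) ->
        eqPair eqA (eqPair eqB eqC) (a1, (b1, c1)) (a2, (b2, c2))
```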
<hr /><b>Introducing FAKE boot workflow</b> (2013-04-03)<br />
<a href="http://fsharp.github.com/FAKE/">FAKE</a> is a tool to automate your builds with F# scripts. Use the newly available FAKE boot workflow to get started quickly, automatically reference packages, and plug in your own build logic from NuGet.<br />
<br />
<a name='more'></a>Starting from <a href="http://nuget.org/packages/FAKE/2.1.173-alpha">FAKE 2.1.173-alpha</a>, FAKE includes a new "boot" workflow. Let us walk through it. First, make a new folder (I am using PowerShell, but you can do this in Cygwin or CMD.exe; Linux+Mono will work once my patch to the F# compiler is accepted):<br />
<br />
<pre>mkdir proj
cd proj
</pre>
<br />
Download <a href="http://nuget.org/">NuGet.exe</a> command-line somewhere and alias it:<br />
<br />
<pre>set-alias nuget path\to\nuget.exe</pre>
<br />
Install latest (pre-release) FAKE from NuGet:<br />
<br />
<pre>nuget install FAKE -pre
ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 4/3/2013 9:11 PM FAKE.2.1.173-alpha
</pre>
<br />
Good, we got the latest! Now, alias fake:<br />
<br />
<pre>set-alias fake FAKE.2.1.173-alpha\tools\FAKE.exe</pre>
<br />
Now we are ready to test FAKE boot workflow.<br />
<br />
<pre>fake boot init
Generated conf.fsx and build.fsx
./conf.fsx
</pre>
<br />
The <code>conf.fsx</code> script uses F# to specify the build dependencies of your project and fetch them from NuGet. Edit it, and add a dependency on the latest <code>Microsoft.AspNet.WebApi.SelfHost</code>. Dependencies will be installed transitively:<br />
<br />
<pre>module FB = Fake.Boot
FB.Prepare {
    FB.Config.Default __SOURCE_DIRECTORY__ with
        NuGetDependencies =
            let ( ! ) x = FB.NuGetDependency.Create x
            [
                !"Microsoft.AspNet.WebApi.SelfHost"
            ]
}
</pre>
<br />
Save that. We can fetch the dependencies and configure the build:<br />
<br />
<pre>fake boot conf
ls packages
cat ./build/boot.fsx
</pre>
<br />
You see the NuGet packages have been installed and an F# file with the reference list was generated.<br />
<br />
Now let us edit the main <code>build.fsx</code> file that is going to execute with the dependencies.<br />
<br />
<pre>./build.fsx</pre>
<br />
Normally this would be your build logic, but just because we can, let us script a little web server:<br />
<br />
<script src="https://gist.github.com/toyvo/a5814f6304f28a1cd8ff.js"></script><br />
<br />
Now, we can run it:<br />
<br />
<pre>fake boot
Serving http://localhost:8080
Press Enter to quit.
</pre>
<br />
Where to go from here: check out the various tasks that FAKE can do for you already. If something is missing, write it in F#, create a NuGet package, and publicize your work. Others will be able to get it in their FAKE scripts by simply referencing your package.<br />
<br />
I would like to see FAKE able to do all the chores, including generating and managing Visual Studio and other IDE files, creating and publishing NuGet packages, initializing project/solution templates, upgrading references, generating documentation, generating the boilerplate to make your project build "from scratch" - anything you can think of. Rule of thumb: there is no good reason to write any logic in MSBuild or XML when you can write F#. It is portable and makes it easy to avoid repeating yourself. If XML is needed for some tool to read, generate it from F#. I have wasted enough time with MSBuild and am not coming back.<br />
<br />
<hr /><b>FAKE with NuGet support</b> (2013-03-29)<br />
<b>UPDATE</b>: the proposal made it into pre-release FAKE; see the more recent article: <a href="http://t0yv0.blogspot.com/2013/04/introducing-fake-boot-workflow.html">http://t0yv0.blogspot.com/2013/04/introducing-fake-boot-workflow.html</a><br />
<br />
Intended F# build workflow: start from an empty folder, write build.fsx, and run fake - and get anything building with software from NuGet.<br />
<br />
<a href="https://github.com/intellifactory/FAKE">Draft Implementation</a> <a href="https://github.com/fsharp/FAKE/issues/116">Discussion</a><br />
<br />
<br />
<a name='more'></a><br />
As we are using <a href="http://github.com/fsharp/FAKE">FAKE</a> for scripting builds a little more at IntelliFactory, I could not resist making a little fork to reduce some of the burden. NuGet is quickly becoming the de-facto package manager for C#/F# assemblies, and we are even considering using a private NuGet feed inside the company for our own projects. It therefore makes sense that the default F# build tool would be intimately aware of and integrated with this package manager.<br />
<br />
There are some NuGet tasks in FAKE already; this is not what I want. By <i>intimate awareness</i> I mean, for example, the ability to easily execute code from the NuGet repository inside your script. Easily - without manually tracking dependencies or figuring out correct reference paths.<br />
<br />
In my fork I accomplish this by using the NuGet in-process API (NuGet.Core) to do the analysis and generate an F# script file with all required references. Your script executes in two stages: first with the BOOT constant set and minimal references, and then normally. In the BOOT stage you get a chance to do preparatory work such as fetching dependencies and generating the reference list. In the second stage you can enjoy working directly with the dependencies you fetched.<br />
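Schematically, such a two-stage script looks like this (the contents are hypothetical; only the BOOT compilation constant itself is part of the described design):

```fsharp
// build.fsx runs twice; the BOOT constant distinguishes the stages.
#if BOOT
// Stage 1: minimal references. Declare NuGet dependencies here so the
// tool can fetch them and generate the reference list.
let stage = "boot"
#else
// Stage 2: normal run - the fetched dependencies are now referenced
// and can be used directly.
let stage = "build"
#endif
printfn "%s stage" stage
```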
<br />
An example tells it best - if you run fake with our changes in a folder with this script, you will get a working website automatically:<br />
<br />
<script src="https://gist.github.com/toyvo/8f1b541a051830b36a18.js"></script>
<hr /><b>Generalizing records combinators a bit</b> (2013-03-27)<br />
Towards generic programming in F#: thoughts on generalizing the earlier <a href="http://t0yv0.blogspot.com/2012/10/combinators-over-records-and-unions.html">combinators over records and unions</a>...<br />
<br />
<a name='more'></a><br />
<br />
I wrote earlier a little post on <a href="http://t0yv0.blogspot.com/2012/10/combinators-over-records-and-unions.html">combinators over records and unions</a>. This approach in F# is still very attractive to me for defining various converters and serializers because it (1) uses no reflection and thus works in JavaScript via WebSharper out of the box; (2) gives the programmer full control to drop out of combinators anywhere where specific (more optimal, or custom) behavior is needed; (3) leans toward generic programming which has the potential to reduce code.<br />
<br />
However, the approach, as presented, was overly specific, so point (3) remains on the TODO list. Code savings are only possible if you have N definitions that prove that datatypes are generic, and M definitions that prove that traits are generic, and from these you are able to derive every trait for every datatype. That is, the manual approach takes M * N definitions, and the GP approach takes M + N definitions. To make it possible, I need to find a way to generalize the combinators presented earlier.<br />
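To make the counting concrete, here is a minimal sketch of the idea (names invented; a real version would recurse over arbitrary sum/product representations rather than a fixed pair):

```fsharp
// One of the N per-datatype definitions: a proof that R is "generic",
// i.e. reducible to a core representation.
type Iso<'T, 'Core> =
    { To : 'T -> 'Core
      From : 'Core -> 'T }

type R = { A : int; B : string }

let rIso : Iso<R, int * string> =
    { To = (fun r -> (r.A, r.B))
      From = (fun (a, b) -> { A = a; B = b }) }

// One of the M per-trait definitions, written once against the core
// representation and reused for every datatype with an Iso:
let showViaIso (iso: Iso<'T, int * string>) (x: 'T) =
    let (a, b) = iso.To x
    sprintf "(%d, %s)" a b

// M + N definitions yield all M * N combinations:
let showR = showViaIso rIso
```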
<br />
If you look at <a href="http://okmij.org/ftp/ML/first-class-modules/">Oleg's site</a>, there is a nice article on first-class OCaml modules in relation to Generic Programming (GP). Of course, F# is incapable of any such thing. So GP proper seems out of reach.<br />
<br />
One idea I had was to try modelling GP proper in Coq and extract to F#, but I did not get very far yet.<br />
<br />
Another idea was to restrict the shape of the generic trait - this is obviously weaker than full-blown GP, but can still be useful. So instead of `g a` with abstraction over `g`, we fix `g` to be of a certain shape. For serializers we want this shape to have both positive and negative occurrences of `a`.<br />
<br />
I had some success prototyping this in Haskell - the idea is to model in a purely functional setting (with monads) to make sure the model holds up, and then port back to F# functions, where, say, `a -> b` is understood as `a -> IO b`, and is itself a monad if we fix `a`: `type M x = a -> IO x`.<br />
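Ported back to F#, the shape looks roughly like this (a sketch; `'E` stands for the fixed input type, and effects are implicit as in ordinary F# code):

```fsharp
// Fixing the input type 'E, functions 'E -> 'X form a reader-like monad
// (the F# counterpart of Haskell's  type M x = a -> IO x).
type M<'E, 'X> = 'E -> 'X

let ret (x: 'X) : M<'E, 'X> = fun _ -> x

let bind (m: M<'E, 'X>) (f: 'X -> M<'E, 'Y>) : M<'E, 'Y> =
    fun e -> f (m e) e

// Example: sequencing two reads against a shared environment, such as
// a System.IO.BinaryReader in the serializer case.
let readPair (readInt: M<'E, int>) : M<'E, int * int> =
    bind readInt (fun a ->
    bind readInt (fun b ->
    ret (a, b)))
```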
<br />
Here is the gist of it. I have a hunch that `Monad` is too strong when modeling the "reader" part and there should be a way to get by with `Applicative` only.<br />
<br />
<script src="https://gist.github.com/toyvo/dc3732ac36d32ba28749.js"></script><br />
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-51211937104979150062013-03-26T21:11:00.000-04:002013-03-26T21:12:40.483-04:00WebSharper, PhoneGap, and Ripple: easier native HTML5 appsWe are experimenting with <a href="http://phonegap.com/">PhoneGap</a>, PhoneGap Build, <a href="http://ripple.incubator.apache.org/">Ripple emulator</a> and <a href="http://websharper.com/">WebSharper</a>. If successful, this will let you write truly cross-platform native mobile apps in F#, quickly pre-testing them in Chrome, and then generating various installers (Android, Windows, iOS - yes, iOS!) without even having to install the SDK.<br />
<br />
<br />
<a name='more'></a><br />
<br />
The current mobile HTML5 application story is this: either you create a mobile-friendly website and have users navigate to it, or you wrap your HTML5 app into a native app - the second choice is what this article is all about. The reasons to wrap can be that (a) you would like to get it into the native store; (b) you would like to use some native APIs (geolocation, accelerometer, contacts DB, etc.).<br />
<br />
Up to now, WebSharper came with a little mobile library of ours that exposed some APIs working on both Android and Windows Phone. PhoneGap incorporates the Apache Cordova library, which seems to solve the same problem, has more APIs, supports iOS, and has many more users. It therefore makes sense, strategically, to let WebSharper use PhoneGap instead of maintaining our own layer.<br />
<br />
Another good service under the same name is PhoneGap Build. Maintaining SDKs to wrap a little HTML5 app into iOS, Android and Windows native apps can be a huge burden. With PhoneGap Build, you just create a zip or a GitHub repository and ask the cloud service to build multiple native app packages for you. Clean and simple.<br />
<br />
Unfortunately, there is still the problem of testing. When you are just starting out, you probably do not own ALL the devices you are targeting. Even if you did, testing on all devices is too slow and painful, especially when debugging simple errors that have nothing to do with mobile devices and would, in fact, show up in a normal browser. Here the Ripple emulator seems to be the way to go - with a little Chrome extension, you can pretend-test your application in your browser. It even draws different mobile device contours to let you see how your app works at different resolutions.<br />
<br />
All these tools seem to be wonderful, but in practice using them currently is still quite painful. We are trying to adapt WebSharper to make the story simpler. As an intermediate step, we released <a href="http://intellifactory.github.com/TypedPhoneGap/">TypedPhoneGap</a> - TypeScript wrapper around PhoneGap API that is structured in a modular, typed fashion with feature detection where possible - no more obtuse string constants, undetected "undefined" problems, or string event names. We now are generating and testing WebSharper bindings from this definition. If all goes well, sample applications should be coming shortly.Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-25520980651606583062013-03-25T21:44:00.000-04:002013-03-25T21:44:15.505-04:00TypeScript: initial impressionsThere are several things under the <a href="http://typescriptlang.org/">TypeScript</a> umbrella: a type system, language spec, JavaScript-targeting compiler and tooling that provides interactive code completion. I have tried TypeScript on a few small projects, the latest one being <a href="http://bitbucket.org/IntelliFactory/typedphonegap">TypedPhoneGap</a>. I also worked extensively with the language spec, wrote a parser and analyzer for the `.d.ts` contract specification fragment, pondered the semantics, and engaged in little flamewars on the language forum. My conclusion, in short, is this: the type system and language design are terrible, the compiler is OK, and tooling is excellent. Overall, it is an improvement over writing bare JavaScript, and the definition fragment is helpful (though not ideal) for communicating JavaScript API formally.<br />
<br />
<a name='more'></a><br />
<br />
It is easy to see why the tooling is good - you can check it out for yourself in the interactive environment on the TypeScript website. I used it through Visual Studio, and though I found it glitchy in a few places (I had to restart VS a dozen times), it is overall quite helpful.<br />
<br />
My complaints are mostly on what I consider to be poor language design decisions.<br />
<br />
First, the type system does not even try to be sound:<br />
<br />
<script src="https://gist.github.com/toyvo/5242309.js"></script><br />
<br />
Second, there are no generics (though they are promised for the next release). It feels very dumb having to replicate code that could have been expressed as generic specialization. Code duplication can be avoided if you drop to JavaScript and start computing things, but this only applies to values and not types. There does not seem to be an easy way to compute TypeScript types as-you-go.<br />
<br />
Third, I find the whole subtyping story simply a waste of time. OCaml has a much better system with row polymorphism, and still I do not get the impression it is used much. It would be interesting to see if it is used in js_of_ocaml to reason about JavaScript libraries. I have not felt a need for this using WebSharper.<br />
<br />
Fourth, there is no abstraction. I have not found a way to define an abstract type. Interfaces are always open. Classes are open too, and I have not found how to make all constructors private. This is really annoying.<br />
<br />
Fifth, there is a lot of recursion craziness going on in the language, complicated by structural typing. I think the types are best described by regular trees. This is not exactly implementation-friendly - I had a very hard time trying to implement things like a subtyping decision procedure, especially since the language spec is very terse on these issues. The online TS compiler works on quite arcane examples - I wonder whether they have a correct equality and subsumption implementation over regular trees, or are using some hack that just happened to work on my examples. Not that it matters very much, since subtyping as defined breaks soundness and is therefore not very interesting or useful.<br />
<br />
Now, the biggest promise I see in TypeScript is to give a formalism for expressing JavaScript contracts, especially since there is some <a href="http://github.com/borisyankov/DefinitelyTyped">adoption</a>. This essentially involves the <code>.d.ts</code> fragment: constructs for expressing JavaScript interfaces. Unfortunately, as it stands, it is quite limited because of the lack of generics and abstraction.<br />
<br />
Also, from our experience with WebSharper, artifacts expressing JavaScript contracts should not be authored by hand. Usually there is a lot of repetition going on in JavaScript library APIs, something that can be easily abstracted over if you are writing in a Turing-complete programming language (which `.d.ts` files are not).<br />
<br />
I therefore envision an ideal toolset for JavaScript contract specification to consist of:<br />
<br />
<ul><li>A machine-readable simple (JSON-based?) standard spec for API contracts</li>
<li>A JavaScript library for generating the API contracts in some EDSL, allowing one to take advantage of abstraction</li>
<li>Tooling for generating easy to use API documentation from the contracts</li>
<li>A utility for sealing a value with a contract to introduce runtime checks and blame assignment (there is some good literature on how to do that)</li>
<li>Tooling for generating `.d.ts` from contracts, parsing `.d.ts` into contracts (possibly with some approximation)</li>
<li>Additional tooling for consuming contracts from JavaScript-based frameworks such as WebSharper</li>
</ul><br />
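For the EDSL point, here is a tiny sketch of what contract generation with abstraction might look like. All names and the representation are invented for illustration; a real design would follow the machine-readable spec from the wishlist above.

```fsharp
// A hypothetical core representation of contracts.
type Contract =
    | CNum
    | CStr
    | CObj of (string * Contract) list
    | CFun of Contract list * Contract   // argument contracts, result

// Abstraction pays off where .d.ts forces repetition: one definition
// generates a whole family of near-identical event-handler members.
let handler payload = CFun ([ payload ], CObj [])

let eventApi (names: string list) payload =
    CObj [ for n in names -> ("on" + n, handler payload) ]

// One line instead of three near-identical .d.ts declarations:
let mediaEvents =
    eventApi [ "Play"; "Pause"; "Ended" ] (CObj [ "time", CNum ])
```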
The problem, as always, is marketing - convincing <i>library authors</i> to use a formalism to express their API. Even if we had a working system satisfying the above wishlist, this would be a tough one. <br />
Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-37923462435622131922013-03-23T11:53:00.001-04:002014-01-13T19:01:48.561-05:00Using Coq as a program optimization tool<b>Update:</b> - a slightly more compelling example of where program computation shines is at <a href="http://t0yv0.blogspot.com/2013/06/generic-programming-in-f-another-take.html">Another take on F# generics</a>.<br />
<br />
I tend to complain a lot about the programming tools I work with. In one sentence, the world of software is just not a sane place. If left unchecked, this complaining attitude can grow into despair - so I enjoy a little escape from time to time. I need to dream of a better world, where programming is meaningful, fun, beautiful, efficient, mostly automated, and where programs are verified. This is why my escape activity is playing with <a href="http://coq.inria.fr/">Coq</a>.<br />
<br />
One fascinating trick that Coq does really well is computing and manipulating programs. Have you ever found yourself writing beautiful, general functional code and then wondering how to make it run faster? Spending lots of time inlining definitions, special-casing, performing simplifications and hoping you do not introduce bugs in the process?<br />
<br />
This activity can be automated in a way that verifies meaning preservation.<br />
<br />
Here is a rather silly example (I will try to think of a better and more convincing one):<br />
<br />
<script src="https://gist.github.com/t0yv0/5228123.js"></script><br />
<br />
Lines 20-24 should in theory be expressible as a single-word tactic. This is where program simplification happens.<br />
<br />
Imagine a better world: you write functional code, focus on clarity of specification, then drop into interactive theorem proving to compute (!) the equivalent huge, ugly, optimized, specialized program that runs a lot faster but is guaranteed to produce the same result.<br />
<br />
<b>EDIT</b>: a reader has asked how manipulating programs in Coq is different from using an optimizing compiler such as GHC. The short answer (and I again realize just how lame my example is since it does not demonstrate it) is <i>control</i>.<br />
<br />
With GHC, you get a black-box optimizer that does some transformations and gives you a result. The resulting program is usually better (faster, uses less space), but once in a while worse. It also may or may not do the essential optimization for your domain. I am not using GHC on a daily basis - so feel free to correct me - but I believe the amount of control you exert is fairly limited, even with RULES pragmas. In Coq, on the other hand, you get "proof mode" - an interactive environment where you can inspect and explore the intermediate versions of your program as it is being optimized, and guide the process. There is a spectrum of possibilities from fully manual to fully automated transformations, programmable in the Ltac tactic language.<br />
<br />
Another advantage (which makes this attractive to me) is that if you are targeting ML (OCaml, F#), you are working with a compiler that refuses to do much optimization. So GHC-like magic is simply not available. Here extracting from Coq may come in handy.<br />
<br />
Yet another advantage is that you are working in a more expressive logic. This opens the door to writing more general programs than your target compiler would admit. I have not yet explored this much, but it seems to be an easy way to bypass the lack of, say, higher-kinded polymorphism in F#.<br />
<br />
Finally, I should also mention that as a proof assistant, Coq makes it practical to construct proofs that your programs (and transformations) are correct with respect to a given semantics. I find this appealing, even though this is a bit beside the point. As a beginner, my experience has been that proving any non-trivial lemma takes unbelievably more time than constructing a program, so for now I am just exploring the possibility of using Coq as a programming language without doing much proof development.Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-34595165893395778872013-03-19T22:25:00.002-04:002013-03-19T22:25:57.260-04:00Automate, automate, automate..The latest snapshot of the WebSharper repositories (<a href="http://bitbucket.org/IntelliFactory/websharper">Bitbucket</a> and <a href="http://github.com/intellifactory/websharper">GitHub</a>) showcases some build automation I finally managed to get working. It starts from MSBuild, which uses NuGet to pull dependencies, including build dependencies, and then jumps to FAKE - using the alpha pre-release FAKE version has solved some F# version problems I faced earlier. It builds a lot of code, creates an output NuGet package, and also creates several Visual Studio templates that it packages up into a VSIX extensibility package - using the free-to-use API we are releasing under <a href="http://bitbucket.org/IntelliFactory/build">IntelliFactory.Build</a>.<br />
<br />
Things are looking a bit better since the build succeeds from scratch inside the AppHarbor environment that is currently hooked up as a simple build server.<br />
<br />
Despite the modest progress, build automation is still a nightmare. I think we need a much simpler story, one that would definitely involve the NuGet repository as a global binaries library and F# as the language for all build logic, with MSBuild in a supporting role (generating MSBuild scripts for Visual Studio users).<br />
<br />
See also the <a href="http://fpish.net/topic/Some/0/76259">discussion</a> on ScriptCS - this project attempts to do something very similar based on C#. I like the authors' emphasis on the Node Package Manager as a model.<br />
<br />
A thought that occurred to me frequently when debugging the builds was a simple question - why do all the parts have to work across process isolation barriers? It seems like we are trying to play the UNIX game on Windows, where the filesystem simply does not keep up and processes have an unbearably slow cold start. For example, why invoke NuGet - or MSBuild, for that matter - as a process through the command line when there is a .NET API for both? It seems that PowerShell at least allows Cmdlets to keep some state in memory, so reusing the same Cmdlet is vastly faster than invoking the same functionality in a separate process. I might work on a tool along these lines as time permits.<br />
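For instance, instead of shelling out to msbuild.exe, a build step can call the MSBuild object model directly. A sketch, assuming a reference to Microsoft.Build.dll from .NET 4.0:

```fsharp
open Microsoft.Build.Execution

// Build a project in-process, avoiding the cold start of a fresh
// msbuild.exe invocation. Returns true on a successful build.
let buildInProcess (projectFile: string) : bool =
    let props = dict [ "Configuration", "Release" ]
    let project = ProjectInstance(projectFile, props, "4.0")
    project.Build([| "Build" |],
                  ([] : Microsoft.Build.Framework.ILogger list))
```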
<br />
A big win would be to have a sane F# compiler (without the famous toxic nuclear waste that Haskell avoids by controlling side effects in the type system). Then we would not have to invoke it in a separate process for each project in the solution. This could drastically reduce build times.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0Harrisonburg, VA, USA38.4495688 -78.86891550000001438.3500713 -79.030277000000012 38.5490663 -78.707554000000016tag:blogger.com,1999:blog-6028449519062514692.post-86772562349541669702013-03-14T16:36:00.000-04:002013-03-20T08:59:58.112-04:00 Multi-targeting .NET projects with F# and FAKEI am currently working on simplifying build configurations for a bunch of projects including <a href="http://bitbucket.org/IntelliFactory/websharper">WebSharper</a>, <a href="http://bitbucket.org/IntelliFactory/build">IntelliFactory.Build</a>, <a href="http://bitbucket.org/IntelliFactory/fastinvoke">IntelliFactory.FastInvoke</a> and eventually multiple WebSharper extensions. I am now using F# and <a href="https://github.com/fsharp/FAKE">FAKE</a> when possible instead of MSBuild, and relying more heavily on the public NuGet repository. <br />
<br />
The good news is that abstracting things in F# and sharing common build logic in a library via NuGet really works well, and feels a lot more natural than MSBuild. Consider this <c>Build.fsx</c> file from <c>FastInvoke</c>:<br />
<br />
<script src="https://gist.github.com/toyvo/3ece4b8eb0adf3472038.js"></script><br />
<br />
The FAKE-based build (well, together with some MSBuild boilerplate that I generate) accomplishes quite a few chores:<br />
<br />
<ul>
<li>Bootstraps <c>NuGet.exe</c> without any binaries in the source repo</li>
<li>Resolves packages specified in solution <c>packages.config</c>, including pulling in <c>FAKE</c> and build logic such as <c>IntelliFactory.Build</c></li>
<li>Determines current Mercurial hash or tag</li>
<li>Constructs AutoAssemblyInfo.fs with company metadata and the Mercurial tag</li>
<li>Constructs MSBuild boilerplate to help projects find NuGet-installed dependencies without specifying the version, for easy dependency version updates</li>
<li>Builds specified projects in multiple framework configurations</li>
</ul>
<br />
And you can, of course, do more inside the FAKE file.<br />
<br />
The bad news is that I expected quite a bit more from FAKE, and I end up fighting it more than using it - note that these may be either legitimate problems with FAKE or just my limited understanding of it.<br />
<br />
<ul>
<li><strike>Running <c>FAKE.exe</c> drops your code into the 2.0 runtime, even if the host process was in 4.0 - not acceptable for me, had to replace <c>FAKE.exe</c> invocation with <c>FSI.exe</c> invocation</strike> - UPDATE: when using pre-release alpha version of FAKE, the scripts default to the 4.0 runtime and use FSharp.Core 4.3.0.0 - problem solved</li>
<li><c>FAKE</c> had no easy support for dependency tracking, such as not overwriting a file unless necessary (useful to prevent, say, the MSBuild it invokes from doing work twice) - I had to roll a few of my own helpers</li>
<li>Surprisingly, <c>FAKE</c> MSBuild helpers call the MSBuild process instead of using the in-process MSBuild API. By using the MSBuild API myself, I am able to speed things up a bit.</li>
<li>On a similar note, I toyed with invoking either <c>FAKE</c> or <c>Fsi.exe</c> in a slave AppDomain (should be faster than a separate process, right?) from the host MSBuild process that my build starts with. The approach failed miserably. <c>Fsi.exe</c> is reading <c>System.Environment.GetCommandLineArgs()</c> instead of reading the <c>EntryPoint</c> args, so that it does not see the args I pass to the slave AppDomain, but instead sees the args that MSBuild receives. And <c>FAKE</c>, again, drops me into the 2.0 runtime, probably starting another system process too.</li>
</ul>
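As an illustration of the dependency-tracking point above, the kind of helper I mean is trivial to roll by hand (a sketch): only rewrite a file when its content actually changed, so that timestamp-based tools downstream do not redo work.

```fsharp
open System.IO

// Write the file only when its content differs, preserving the old
// timestamp (and thus downstream up-to-date checks) when it does not.
let writeIfChanged (path: string) (content: string) =
    if not (File.Exists path) || File.ReadAllText path <> content then
        File.WriteAllText(path, content)
```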
<br />
In the end my impression is that there is tremendous value in automating build logic in F# instead of MSBuild. As to <c>FAKE</c>, it seems most of its value comes from the helper library - some shared direct-style imperative recipes. I would like to see that released separately, say as <c>FAKE.Lib</c> NuGet package, to make it easier to use standalone. Also, it seems that <c>FAKE</c> could really benefit from some extra standard features for dependency management such as comparing target input and output files by checksum or date, to be on par with <c>rake</c>.Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-72471856425889464772012-10-10T17:43:00.000-04:002012-10-10T17:46:42.689-04:00Combinators over Records and UnionsIn the <a href="http://t0yv0.blogspot.com/2012/10/combinators-over-discrimated-unions-in.html">previous post</a>, I discussed designing combinator libraries that compose some property over unions. It is only fitting to throw records in the mix.<br />
<br />
<pre>type U =
| A of int
| B of float
| C of string
let UFormat =
(
UnionCase A IntFormat <<
UnionCase B FloatFormat <<
UnionCase C StringFormat
)
|> Union (fun a b c x ->
match x with
| A x -> a x
| B x -> b x
| C x -> c x)
type R =
{
A : int
B : float
C : string
}
let RFormat : Format<R> =
(
RecordField (fun r -> r.A) IntFormat <<
RecordField (fun r -> r.B) FloatFormat <<
RecordField (fun r -> r.C) StringFormat
)
|> Record (fun a b c -> { A = a; B = b; C = c })
</pre><br />
With some simplifications, here is the code:<br />
<br />
<script src="https://gist.github.com/3868630.js"> </script>Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com1tag:blogger.com,1999:blog-6028449519062514692.post-16855763993778603232012-10-08T11:04:00.000-04:002012-10-08T11:04:38.264-04:00Combinators over Discriminated Unions in MLDiscriminated unions or sum types are a natural way to model logical OR. Often you have a property that distributes over OR. Say, in F# (used throughout the article, though the ideas should apply equally well to any ML), you can write a combinator of the type:<br />
<br />
<pre>P<'T1> → P<'T2> → P<Choice<'T1,'T2>></pre><br />
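For concreteness, here is such a binary combinator instantiated at the `Format` type used later in this post (a sketch using a one-byte tag; the `Format` definition is repeated so the snippet stands alone):

```fsharp
open System.IO

type Format<'T> =
    { Read : BinaryReader -> 'T
      Write : BinaryWriter -> 'T -> unit }

// The combinator P<'T1> -> P<'T2> -> P<Choice<'T1,'T2>> at P = Format:
// writing emits a tag byte followed by the payload; reading dispatches
// on the tag.
let choice (p1: Format<'T1>) (p2: Format<'T2>) : Format<Choice<'T1, 'T2>> =
    { Read = fun r ->
        match r.ReadByte() with
        | 0uy -> Choice1Of2 (p1.Read r)
        | _   -> Choice2Of2 (p2.Read r)
      Write = fun w x ->
        match x with
        | Choice1Of2 a -> w.Write 0uy; p1.Write w a
        | Choice2Of2 b -> w.Write 1uy; p2.Write w b }
```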
How to go from here to a nice set of combinators that would handle an arbitrary union? This question has been on my mind for a while, and finally I have an acceptable solution.<br />
<br />
As a disclaimer, at the level of theory the question is completely trivial. The interest is in how to find a user-friendly ML interface. Even this, I suspect, has been solved before, and at a much more general level. I am aware for instance of Vesa Karvonen's <i>Generics for the working ML'er</i>, and a more recent Oleg Kiselyov's <a href="http://okmij.org/ftp/ML/first-class-modules/generics.ml">presentation</a> of generics in OCaml. I am looking here at a much simpler setting, hopefully taking it to a more accessible, for-dummies-like-myself level.<br />
<br />
Suppose you are designing binary format combinators to let users of your library construct values of the form:<br />
<br />
<pre>type Format<'T> =
{
Read : BinaryReader → 'T
Write : BinaryWriter → 'T → unit
}
</pre><br />
Given an arbitrary union type and some primitive Format values, the user should be able to compose them. This involves projecting from sub-types to the parent union type, and matching backwards. After some experimentation, the design I have looks like this:<br />
<br />
<pre>type U =
| A of int
| B of float
| C of string
let UFormat =
Union (fun a b c x →
match x with
| A x → a x
| B x → b x
| C x → c x)
<< Case A IntFormat
<< Case B FloatFormat
<< Case C StringFormat
<| End
</pre>That's pretty much it. The magic is in the type of the Case combinator, which accumulates types so that the Union combinator can present N-way pattern matching as a single, reasonably convenient function to write. Full code: <br />
<script src="https://gist.github.com/3852920.js"> </script>Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com1tag:blogger.com,1999:blog-6028449519062514692.post-68520676840775038332012-09-14T12:57:00.001-04:002012-10-08T11:06:36.759-04:00Faster Printf Released on NuGet<b>EDIT:</b> The project described in this article is in alpha stage. If you are interested in a more mature and faster drop-in replacement for the F# Printf module, check out Arseny Kapoulkine's <a href="http://hg.zeuxcg.org/fastprintf">fastprintf</a>.<br />
<br />
There is news on the faster F# Printf.* story. We released an alternative implementation as a package today.<br />
<br />
NuGet: <a href="http://nuget.org/packages/IntelliFactory.Printf">IntelliFactory.Printf</a><br />
<br />
Source: <a href="https://bitbucket.org/IntelliFactory/printf">https://bitbucket.org/IntelliFactory/printf</a><br />
<br />
Some background: F# Printf.* functions are a very nice interface for formatted printing, but the default implementation is quite slow, which is undesirable for production use (say, for logging inside a server). I believe the F# team is addressing this for their next release - and when it comes, it is going to be awesome. In the meanwhile, we offer a drop-in replacement library that can speed things up a bit with the existing F#.<br />
<br />
The code does not go 100% of the way toward what is possible to optimize; in particular, it does not use Reflection.Emit to construct an efficient closure capable of bypassing some of the closure allocation overhead via FastInvoke. However, I believe the improved performance is already practical, and the part of the implementation dealing with closure construction is pleasantly clear and easy to understand this way. <br />
<br />
If you decide to try it out, note that this is an alpha release, as it does not yet support format modifier flags and %A (I intend to rectify this as time permits).<br />
<br />
<b>ACKNOWLEDGEMENTS:</b> I would like to thank Vladimir Matveev from the Microsoft F# team for some fruitful discussions on the internals of F# Printf.*; This project is largely based on my best attempt to beat Vladimir's code on a set of micro-benchmarks. In the end I lost the race, as Vladimir has come up with an even faster implementation since then. I am still quite happy though, knowing that with any luck Vladimir's code will make it to the F# trunk, and all will benefit.Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-82424230805797088852012-07-19T10:59:00.001-04:002012-07-19T10:59:51.696-04:00Speeding up F# Printf.*There recently was an interesting SO question on F# Printf.* family of functions:<br />
<br />
<a href="http://stackoverflow.com/questions/11559440/how-to-manage-debug-printing-in-f">http://stackoverflow.com/questions/11559440/how-to-manage-debug-printing-in-f</a> <br />
<br />
It is known that these functions are very slow. Slow enough for most people to avoid them entirely, despite the advantages they offer in verifying argument types.<br />
<br />
What I did not know is that these functions are so slow that a few lines of simple user code can speed them up, without changing the interface:<br />
<br />
<script src="https://gist.github.com/3144484.js?file=Sprintf.fs">
</script><br />
<br />
With this code I get from 2x to 10x speedups. This is suspicious - I am afraid F# does not cache the parsing of format strings. There is no reason why it should not.<br />
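The user-level fix boils down to memoizing on the format string (a sketch in the spirit of the gist above; like it, deliberately not thread-safe):

```fsharp
open System.Collections.Generic

// Cache the closure produced by sprintf per format string, so each
// format is parsed only once across calls.
let cache = Dictionary<string, obj>()

let sprintf' (fmt: Printf.StringFormat<'T>) : 'T =
    match cache.TryGetValue fmt.Value with
    | true, f -> f :?> 'T
    | _ ->
        let f = sprintf fmt
        cache.[fmt.Value] <- box f
        f
```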
<br />
Exercise for the reader: fix the code above to be thread-safe.<br />Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-8513131884781357332012-06-14T11:21:00.001-04:002012-06-14T11:21:42.030-04:00AppHarbor: Free Cloud Hosting of WebSharper AppsWe have just released a new version (2.4.85) of <a href="http://websharper.com/">WebSharper</a>, our web development framework and F#-to-JavaScript compiler. The main highlight of this release is experimental support for easy cloud deployment of your applications with <a href="http://appharbor.com/">AppHarbor</a>. Small AppHarbor deployments are currently free, which is great news for individual developers and small companies.<br />
<br />
How to get it to work:<br />
<br />
<ol>
<li>Set up a GitHub or Bitbucket repository</li>
<li>Set up an AppHarbor account</li>
<li>Connect the two, according to AppHarbor instructions, so that pushing to your repository notifies AppHarbor to pull, build and deploy your project on their cloud</li>
<li>Install WebSharper using the MSI installer</li>
<li>Install <a href="http://nuget.org/">NuGet</a> through VisualStudio extensions manager</li>
<li>Create a new WebSharper solution</li>
<li>Using NuGet package manager, add a dependency on the "WebSharper" NuGet package from NuGet gallery to all projects in your solution</li>
<li>Enable NuGet package restore (alternatively, use our own tool <a href="https://bitbucket.org/IntelliFactory/buildmagic">BuildMagic</a>)</li>
<li>Push to your repository, and see it working!</li>
</ol>
<br />
<b>NOTES:</b> you may commit the whole contents of the NuGet-generated <span style="font-family: 'Courier New', Courier, monospace;">packages</span> folder with binaries, which guarantees successful builds on AppHarbor servers but is not recommended, as it adds unnecessary bloat. It is better not to commit <span style="font-family: 'Courier New', Courier, monospace;">packages</span> and to rely on NuGet package restore (or BuildMagic) to download the packages during the build.<br />
<br />
<b>IMPORTANT:</b> Due to a technicality we have not yet overcome, you do have to commit all <span style="font-family: 'Courier New', Courier, monospace;">*.targets</span><span style="font-family: inherit;"> from the WebSharper package in the </span><span style="font-family: 'Courier New', Courier, monospace;">packages</span><span style="font-family: inherit;"> folder. You do not have to commit any WebSharper <i>binaries</i> if you use NuGet package restore or BuildMagic.</span><br />
<br />
We have also started releasing WebSharper extensions via NuGet. This should provide a convenient way for you to install and update extensions in your projects.<br />
<br />
Please let us know how this works for you. As always, your feedback, bug reports and suggestions are welcome at our <a href="http://bitbucket.org/IntelliFactory/websharper/issues">issue tracker</a>.Anonymoushttp://www.blogger.com/profile/08313802559573057206noreply@blogger.com0tag:blogger.com,1999:blog-6028449519062514692.post-85255362239405907662012-05-25T20:23:00.000-04:002012-05-25T20:23:20.727-04:00NuGet BuildMagic: No Binaries in your DVCSLet me introduce BuildMagic: <a href="https://bitbucket.org/IntelliFactory/buildmagic">https://bitbucket.org/IntelliFactory/buildmagic</a> - get 0.0.1 via NuGet<br />
<br />
Have you used NuGet? If not, you probably should: with aggressive backing from Microsoft, it is quickly converging on being the default package manager and binary repository for .NET. Other similar projects now stand little chance, though they often have more technical merit.<br />
<br />
Have you used NuGet package restore? You probably should: pushing binaries into source control is wicked. This is especially true with a DVCS, where every fresh clone has to download the entire history of your binaries, and doubly so on Bitbucket, which is quite slow with binaries.<br />
<br />
Now, have you been disappointed that NuGet package restore requires you to commit the NuGet.exe binary to source control? If so, check out BuildMagic, a little workaround for the issue. Unfortunately, you still have to commit something redundant (a targets file), but at least now you can say a definite NO to binaries.<br />
<br />
<b>A Prettier Printer in ML</b> (2012-04-28)<br />
<br />
To try out something different, I took a shot at writing a pretty-printing module. Of all the papers I found on the subject, perhaps the most accessible is <a href="http://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf">A prettier printer</a> by Wadler. In particular, the algebra of documents presented in the paper is very simple and therefore compelling.<br />
<br />
I tried to implement these combinators in OCaml. Unfortunately for OCaml, Wadler's code relies heavily on laziness to search the exponentially large tree of possible printouts, selecting the optimal one by limited look-ahead and backtracking. The exact same behavior could be obtained in OCaml by mechanically injecting explicit "lazy" everywhere, but the result would not be very pretty. I decided to play with a different approach and pre-compute enough information to make every decision on the spot.<br />
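<br />
To make the "inject lazy everywhere" option concrete, here is a minimal sketch (not ocaml-pretty's actual API, and with the choice-of-layout machinery omitted): every recursive tail of the document becomes a <span style="font-family: 'Courier New', Courier, monospace;">Lazy.t</span>, forced only when rendering reaches it:<br />

```ocaml
(* A toy document type with the laziness made explicit, in the spirit of
   Wadler's Haskell DOC type. Each constructor carries a lazily computed
   tail, so a renderer can inspect a prefix without building the rest. *)
type doc =
  | Nil
  | Text of string * doc Lazy.t   (* literal text, then the rest *)
  | Line of int * doc Lazy.t      (* newline indented by [n] spaces *)

(* Render the whole document, forcing each tail as it is reached. *)
let rec render = function
  | Nil -> ""
  | Text (s, rest) -> s ^ render (Lazy.force rest)
  | Line (i, rest) -> "\n" ^ String.make i ' ' ^ render (Lazy.force rest)

let () =
  let d = Text ("hello,", lazy (Line (2, lazy (Text ("world", lazy Nil))))) in
  print_string (render d)  (* hello,<newline>  world *)
```

Every constructor application now carries syntactic noise, and a real implementation would need this on each of the document combinators, which is exactly the "not very pretty" part.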
<br />
The result is <a href="http://github.com/toyvo/ocaml-pretty">ocaml-pretty</a> - see the link for the code and API docs. I tried to maintain the same behavior as in the paper, but have only checked consistency on some small examples. One obvious operational difference is memory use: since the document data structure is strict, pretty-printing cannot help but use O(N) memory in the size of the document. The published Haskell algorithm certainly does better. I did not find an obvious simple fix for this within the approach I adopted, and ended up deciding it does not matter for my purposes.<br />
<br />
The OCaml standard library has a Format module - this is what you should use for real-world OCaml pretty-printing. Its imperative interface avoids both the O(N) memory overhead and the overhead of allocating lazy data structures, and it ships with every OCaml installation.<br />
<br />
<b>UPDATE:</b> an adaptation of Wadler's algorithm to strict languages, in particular OCaml, has already been published in the paper <i>Strictly Pretty</i> by Christian Lindig, see <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2200">10.1.1.34.2200</a>, as was kindly pointed out to me by its author. The critical insight is managing Group nodes explicitly to avoid the exponential search space.<br />
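<br />
As an illustration of that insight, here is a condensed strict sketch in the spirit of Strictly Pretty (the type and function names are my own, not the paper's, and indentation tracking is omitted for brevity): a Group renders on one line when its flat width fits in the remaining space, and breaks otherwise, so no backtracking is needed:<br />

```ocaml
(* Explicit Group nodes: the layout decision is made per group, on the
   spot, by comparing the group's one-line width to the space left. *)
type doc =
  | Text of string
  | Break                 (* a space when flat, a newline when broken *)
  | Group of doc list     (* flat if it fits on the current line *)

(* Width of a document if printed on a single line. *)
let rec flat_width = function
  | Text s -> String.length s
  | Break -> 1
  | Group ds -> List.fold_left (fun n d -> n + flat_width d) 0 ds

(* Render [doc] into a string, fitting groups within [width] columns.
   [used] is the number of columns consumed on the current line. *)
let pretty width doc =
  let buf = Buffer.create 64 in
  let rec go used flat = function
    | Text s -> Buffer.add_string buf s; used + String.length s
    | Break ->
        if flat then (Buffer.add_char buf ' '; used + 1)
        else (Buffer.add_char buf '\n'; 0)
    | Group ds ->
        (* The key decision point: check the fit once, no backtracking. *)
        let flat' = flat || used + flat_width (Group ds) <= width in
        List.fold_left (fun u d -> go u flat' d) used ds
  in
  ignore (go 0 false doc);
  Buffer.contents buf

let () =
  let d = Group [Text "let"; Break; Text "x"; Break; Text "="; Break; Text "1"] in
  print_endline (pretty 10 d);  (* fits: "let x = 1" *)
  print_endline (pretty 5 d)    (* too wide: breaks at each Break *)
```

Because <span style="font-family: 'Courier New', Courier, monospace;">flat_width</span> is computed strictly, this never explores alternative layouts, which is exactly what makes the approach workable without laziness.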