We sat down with Daniel to learn more about his research and how the Wolfram Language plays a part in it.
This turned out to be an ideal choice of research area, and the timing was perfect: within a week of my joining the group, LIGO made its first gravitational wave detection, and things got very exciting from there.
I was very fortunate to work in some of the most exciting fields of astronomy as well as computer science. At the [NCSA] Gravity Group, I had complete freedom to work on any project I wanted, funding that freed me from teaching duties, and a lot of support and guidance from my advisors and mentors, who are experts in astrophysics and supercomputing. NCSA was also an ideal environment for interdisciplinary research.
Initially, my research was focused on developing gravitational waveform models using post-Newtonian methods, calibrated with massively parallel numerical relativity simulations using the Einstein Toolkit on the Blue Waters petascale supercomputer.
These waveform models are used to generate the templates required by the existing matched-filtering method (a template-matching method) to detect signals in the data from LIGO and estimate their properties.
However, these template-matching methods are slow, extremely computationally expensive and not scalable to all types of signals. Furthermore, they are not optimal for the complex non-Gaussian noise background in the LIGO detectors. This meant a new approach was necessary to solve these issues.
My article was featured in the special issue commemorating the Nobel Prize in 2017.
Even though peer review is done for free by referees in the scientific community and the expenses of hosting online articles are negligible, most high-profile journals today are behind expensive paywalls and charge thousands of dollars for publication. However, Physics Letters B is completely open access, free to everyone in the world, and has no publication charges for the authors. I believe all journals should follow this example to maximize scientific progress by promoting open science.
This was the main reason why we chose Physics Letters B as the very first journal where we submitted this article.
I think the attendees and judges found this very impressive, since it was connecting high-performance parallel numerical simulations with artificial intelligence methods based on deep learning to enable real-time analysis of big data from LIGO for gravitational wave and multimessenger astrophysics. Basically, this research is at the interface of all these exciting topics receiving a lot of hype recently.
I had been interested in artificial intelligence since childhood, but I had no background in deep learning or even machine learning until November 2016, when I attended the Supercomputing Conference (SC16).
There was a lot of hype about deep learning at this conference, especially a lot of demos and workshops by NVIDIA, which got me excited to try out these techniques for my research. This was also right after the new neural network functionality was released in Version 11 of the Wolfram Language. I already had the training data of gravitational wave signals from my research with the NCSA Gravity Group, as mentioned before. So all these came together, and this was a perfect time to try out applying deep learning to tackle the problem of gravitational wave analysis.
Since I had no background in this field, I started out by taking Geoffrey Hinton's online course on Coursera and Stanford's CS231n, and quickly read through the Deep Learning book by Goodfellow, Bengio and Courville, all in about a week.
Then it took only a couple of days to get used to the neural net framework in the Wolfram Language by reading the documentation. I decided to feed time series inputs directly into 1D convolutional neural networks instead of using images (spectrograms). Amazingly, the very first convolutional network I tried performed better than expected for gravitational wave analysis, which was very encouraging.
Here are some advantages of using deep learning over matched filtering:
1) Speed: The analysis can be carried out within milliseconds using deep learning (with minimal computational resources), which will help in finding the electromagnetic counterpart using telescopes faster. Enabling rapid follow-up observations can lead to new physical insights.
2) Covering more parameters: Only a small subset of the full parameter space of signals can be searched using matched filtering (template matching), since the computational cost explodes exponentially with the number of parameters. Deep learning is highly scalable and requires only a one-time training process, so the high-dimensional parameter space can be covered.
3) Generalization to new sources: The article shows that signals from new classes of sources beyond the training data, such as spin-precessing or eccentric compact binaries, can be automatically detected with this method at the same sensitivity. This is because, unlike template-matching techniques, deep learning can interpolate between points within the training data and generalize beyond it to some extent.
4) Resilience to non-Gaussian noise: The results show that this deep learning method can distinguish signals from transient non-Gaussian noise (glitches) and works even when a signal is contaminated by a glitch, unlike matched filtering. For instance, a glitch occurring in coincidence with the recent detection of the neutron star merger delayed the analysis by several hours using existing methods and required manual inspection. The deep learning technique can automatically find these events and estimate their parameters.
5) Interpretability: Once the deep learning method detects a signal and predicts its parameters, this can be quickly crossvalidated using matched filtering with a few templates around these predicted parameters. Therefore, this can be seen as a method to accelerate matched filtering by narrowing down the search space—so the interpretability of the results is not lost.
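The contrast between the two approaches is easiest to see in a toy version of matched filtering: slide a known template across noisy data and look for a correlation peak. This is only a schematic sketch; the template, noise and sizes below are invented for illustration and are not actual LIGO waveforms:

```wolfram
(* Hypothetical toy template: a damped sinusoid standing in for a waveform *)
template = Table[Sin[2 Pi t/16.] Exp[-t/64.], {t, 0., 63.}];

(* Noisy data stream with the template buried starting at sample 400 *)
data = RandomReal[{-1, 1}, 1024];
data[[400 ;; 463]] += 2 template;

(* Matched filtering in miniature: correlate and find the peak position *)
correlation = ListCorrelate[template, data];
peak = First[Ordering[correlation, -1]]
```

The cost of the real method comes from needing one such correlation per template over a bank of hundreds of thousands of templates; a trained network replaces the whole bank with a single forward pass.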
I have been using Mathematica since I was an undergraduate at IIT Bombay. I have used it for symbolic calculation as well as numerical computation.
The Wolfram Language is very coherent, unlike other languages such as Python, and includes all the functionality across different domains of science and engineering without relying on any external packages that have to be loaded. All the 6,000 or so functions have explicit names and are designed with a very similar syntax, which means that most of the time you can simply guess the name and usage without referring to any documentation. The documentation is excellent, and it is all in one place.
Overall, the Wolfram Language saves a researcher's time by a factor of 2–3 compared to other programming languages. This means you can do twice as much research. If everyone used Mathematica, we could double the progress of science!
I also used it for all my coursework, and submitted Mathematica notebooks exported into PDFs, while everyone else in my class was still writing things down with pen and paper.
The Wolfram Language neural network framework was extremely helpful for me. It is a very high-level framework and doesn't require you to worry about what is happening under the hood. Even someone with zero background in deep learning can use it successfully for their projects simply by referring to the documentation.
Using GPUs for training with the Wolfram Language was as simple as adding the option TargetDevice -> "GPU" to the code. With this small change, everything ran on GPUs like magic on any of my machines on Windows, OS X or Linux, including my laptop, Blue Waters, the Campus Cluster, the Volta and Pascal NVIDIA DGX-1 deep learning supercomputers and the hybrid machine with four P100 GPUs at the NCSA Innovative Systems Lab.
I used about 12 GPUs in parallel to try out different neural network architectures as well.
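As a sketch of what this looks like in code (the network below is a toy 1D convolutional classifier and trainingData is a placeholder, not the architecture or dataset from the article), the switch from CPU to GPU really is just the one option:

```wolfram
(* Toy 1D convolutional classifier on length-1024 time series *)
net = NetChain[{ConvolutionLayer[16, 8], Ramp, PoolingLayer[4],
    FlattenLayer[], LinearLayer[2], SoftmaxLayer[]},
   "Input" -> {1, 1024},
   "Output" -> NetDecoder[{"Class", {"noise", "signal"}}]];

(* Identical to the CPU call, except for TargetDevice *)
trained = NetTrain[net, trainingData, TargetDevice -> "GPU"]
```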
I completed the whole project, including the research, writing the paper and posting on arXiv, within two weeks after I came up with the idea at SC16, even though I had never done any deep learning–related work before. This was only possible because I used the Wolfram Language.
I had drafted the initial version of the research paper as a Mathematica notebook. This allowed me to write paragraphs of text and typeset everything, even mathematical equations and figures, and organize into sections and subsections just like in a Word document. At the end, I could export everything into a LaTeX file and submit to the journal.
Everything, including the data preparation, preprocessing, training and inference with the deep convolutional neural nets, along with the preparation of figures and diagrams of the neural net architecture, was done with the Wolfram Language.
Apart from programming, I regularly use Mathematica notebooks as a word processor and to create slides for presentations. All this functionality is included with Mathematica.
Read the documentation, which is one of the greatest strengths of the language.
There are a lot of included examples of using deep learning for various types of problems, such as classification and regression, in fields like time series analysis, natural language processing and image processing.
The Wolfram Neural Net Repository is a unique feature of the Wolfram Language that is super helpful. You can directly import state-of-the-art neural network models that are pretrained for hundreds of different tasks and use them in your code. You can also perform "net surgery" on these models to customize them as you please for your research or applications.
The Mathematica Stack Exchange is a very helpful resource, as is the Fast Introduction for Programmers, along with Mathematica Programming—An Advanced Introduction by Leonid Shifrin.
Deep Learning for Real-Time Gravitational Wave Detection and Parameter Estimation: Results with Advanced LIGO Data (Physics Letters B)
Glitch Classification and Clustering for LIGO with Deep Transfer Learning (Deep Learning for Physical Sciences workshop, NIPS 2017)
Deep Neural Networks to Enable Real-Time Multimessenger Astrophysics (Physical Review D)
Last September we released Version 11.2 of the Wolfram Language and Mathematica—with all sorts of new functionality, including 100+ completely new functions. Version 11.2 was a big release. But today we’ve got a still bigger release: Version 11.3 that, among other things, includes nearly 120 completely new functions.
This June 23rd it’ll be 30 years since we released Version 1.0, and I’m very proud of the fact that we’ve now been able to maintain an accelerating rate of innovation and development for no less than three decades. Critical to this, of course, has been the fact that we use the Wolfram Language to develop the Wolfram Language—and indeed most of the things that we can now add in Version 11.3 are only possible because we’re making use of the huge stack of technology that we’ve been systematically building for more than 30 years.
We’ve always got a large pipeline of R&D underway, and our strategy for .1 versions is to use them to release everything that’s ready at a particular moment in time. Sometimes what’s in a .1 version may not completely fill out a new area, and some of the functions may be tagged as “experimental”. But our goal with .1 versions is to be able to deliver the latest fruits of our R&D efforts on as timely a basis as possible. Integer (.0) versions aim to be more systematic, and to provide full coverage of new areas, rounding out what has been delivered incrementally in .1 versions.
In addition to all the new functionality in 11.3, there’s a new element to our process. Starting a couple of months ago, we began livestreaming internal design review meetings that I held as we brought Version 11.3 to completion. So for those interested in “how the sausage is made”, there are now almost 122 hours of recorded meetings, from which you can find out exactly how some of the things you can now see released in Version 11.3 were originally invented. And in this post, I’m going to be linking to specific recorded livestreams relevant to features I’m discussing.
OK, so what’s new in Version 11.3? Well, a lot of things. And, by the way, Version 11.3 is available today on both desktop (Mac, Windows, Linux) and the Wolfram Cloud. (And yes, it takes extremely nontrivial software engineering, management and quality assurance to achieve simultaneous releases of this kind.)
In general terms, Version 11.3 not only adds some completely new directions, but also extends and strengthens what’s already there. There’s lots of strengthening of core functionality: still more automated machine learning, more robust data import, knowledgebase predictive prefetching, more visualization options, etc. There are all sorts of new conveniences: easier access to external languages, immediate input iconization, direct currying, etc. And we’ve also continued to aggressively push the envelope in all sorts of areas where we’ve had particularly active development in recent years: machine learning, neural nets, audio, asymptotic calculus, external language computation, etc.
Here’s a word cloud of new functions that got added in Version 11.3:
There are so many things to say about 11.3, it’s hard to know where to start. But let’s start with something topical: blockchain. As I’ll be explaining at much greater length in future posts, the Wolfram Language—with its built-in ability to talk about the real world—turns out to be uniquely suited to defining and executing computational smart contracts. The actual Wolfram Language computation for these contracts will (for now) happen off the blockchain, but it’s important for the language to be able to connect to blockchains—and that’s what’s being added in Version 11.3. [Livestreamed design discussion.]
The first thing we can do is just ask about blockchains that are out there in the world. Like here’s the most recent block added to the main Ethereum blockchain:
BlockchainBlockData[-1, BlockchainBase -> "Ethereum"]
Now we can pick up one of the transactions in that block, and start looking at it:
BlockchainTransactionData["735e1643c33c6a632adba18b5f321ce0e13b612c90a3b9372c7c9bef447c947c", BlockchainBase -> "Ethereum"]
And we can then start doing data science—or whatever analysis—we want about the structure and content of the blockchain. For the initial release of Version 11.3, we’re supporting Bitcoin and Ethereum, though other public blockchains will be added soon.
But already in Version 11.3, we’re supporting a private (Bitcoin-core) Wolfram Blockchain that’s hosted in our Wolfram Cloud infrastructure. We’ll be periodically publishing hashes from this blockchain out in the world (probably in things like physical newspapers). And it’ll also be possible to run versions of it in private Wolfram Clouds.
It’s extremely easy to write something to the Wolfram Blockchain (and, yes, it charges a small number of Cloud Credits):
BlockchainPut[Graphics[Circle[]]]
The result is a transaction hash, which one can then look up on the blockchain:
BlockchainTransactionData["9db73562fb45a75dd810456d575abbeb313ac19a2ec5813974c108a6935fcfb9"]
Here’s the circle back again from the blockchain:
BlockchainGet["9db73562fb45a75dd810456d575abbeb313ac19a2ec5813974c108a6935fcfb9"]
By the way, the Hash function in the Wolfram Language has been extended in 11.3 to immediately support the kinds of hashes (like “RIPEMD160SHA256”) that are used in cryptocurrency blockchains. And by using Encrypt and related functions, it’s possible to start setting up some fairly sophisticated things on the blockchain—with more coming soon.
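For instance, the composed method named above can be used like any other Hash type; the "HexString" third argument is the standard way to get a hexadecimal result (the input string here is just an example):

```wolfram
(* A plain SHA-256 hash, as a hex string *)
Hash["Hello blockchain", "SHA256", "HexString"]

(* The composed RIPEMD-160-of-SHA-256 used in Bitcoin-style addresses *)
Hash["Hello blockchain", "RIPEMD160SHA256", "HexString"]
```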
Alright, so now let’s talk about something really big that’s new—at least in experimental form—in Version 11.3. One of our long-term goals in the Wolfram Language is to be able to compute about anything in the world. And in Version 11.3 we’re adding a major new class of things that we can compute about: complex engineering (and other) systems. [Livestreamed design discussions 1 and 2.]
Back in 2012 we introduced Wolfram SystemModeler: an industrial-strength system modeling environment that’s been used to model things like jet engines with tens of thousands of components. SystemModeler lets you both run simulations of models, and actually develop models using a sophisticated graphical interface.
What we’re adding (experimentally) in Version 11.3 is the built-in capability for the Wolfram Language to run models from SystemModeler—or in fact basically any model described in the Modelica language.
Let’s start with a simple example. This retrieves a particular model from our built-in repository of models:
SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"]
If you press the [+] you see more detail:
But the place where it gets really interesting is that you can actually run this model. SystemModelPlot makes a plot of a “standard simulation” of the model:
SystemModelPlot[SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"]]
What actually is the model underneath? Well, it’s a set of equations that describe the dynamics of how the components of the system behave. And for a very simple system like this, these equations are already pretty complicated:
SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"]["SystemEquations"]
It comes with the territory in modeling real-world systems that there tend to be lots of components, with lots of complicated interactions. SystemModeler is set up to let people design arbitrarily complicated systems graphically, hierarchically connecting together components representing physical or other objects. But the big new thing is that once you have the model, then with Version 11.3 you can immediately work with it in the Wolfram Language.
Every model has lots of properties:
SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"]["Properties"]
One of these properties gives the variables that characterize the system. And, yes, even in a very simple system like this, there are already lots of those:
SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"]["SystemVariables"]
Here’s a plot of how one of those variables behaves in the simulation:
SystemModelPlot[SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"], "idealTriac.capacitor.p.i"]
A typical thing one wants to do is to investigate how the system behaves when parameters are changed. This simulates the system with one of its parameters changed, then makes a plot:
SystemModelSimulate[SystemModel["Modelica.Electrical.Analog.Examples.IdealTriacCircuit"], <|"ParameterValues" -> {"V.freqHz" -> 2.5}|>]
SystemModelPlot[%, "idealTriac.capacitor.p.i"]
We could go on from here to sample lots of different possible inputs or parameter values, and do things like studying the robustness of the system to changes. Version 11.3 provides a very rich environment for doing all these things as an integrated part of the Wolfram Language.
In 11.3 there are already over 1000 ready-to-run models included—of electrical, mechanical, thermal, hydraulic, biological and other systems. Here’s a slightly more complicated example—the core part of a car:
SystemModel["IndustryExamples.AutomotiveTransportation.Driveline.DrivelineModel"]
If you expand the icon, you can mouse over the parts to find out what they are:
This gives a quick summary of the model, showing that it involves 1110 variables:
SystemModel["IndustryExamples.AutomotiveTransportation.Driveline.DrivelineModel"]["Summary"]
In addition to complete ready-to-run models, there are also over 6000 components included in 11.3, from which models can be constructed. SystemModeler provides a full graphical environment for assembling these components. But one can also do it purely with Wolfram Language code, using functions like ConnectSystemModelComponents (which essentially defines the graph of how the connectors of different components are connected):
components = {
  "R" \[Element] "Modelica.Electrical.Analog.Basic.Resistor",
  "L" \[Element] "Modelica.Electrical.Analog.Basic.Inductor",
  "AC" \[Element] "Modelica.Electrical.Analog.Sources.SineVoltage",
  "G" \[Element] "Modelica.Electrical.Analog.Basic.Ground"};
connections = {"G.p" -> "AC.n", "AC.p" -> "L.n", "L.p" -> "R.n", "R.p" -> "AC.n"};
model = ConnectSystemModelComponents[components, connections]
You can also create models directly from their underlying equations, as well as making “blackbox models” purely from data or empirical functions (say from machine learning).
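A minimal sketch of the equation route, using the new CreateSystemModel function on a simple damped oscillator (the equation is invented purely for illustration):

```wolfram
(* Build a system model directly from a differential equation *)
model = CreateSystemModel[{x''[t] + 0.3 x'[t] + x[t] == 0}, t];

(* Simulate for 30 seconds and plot the response *)
SystemModelPlot[SystemModelSimulate[model, 30]]
```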
It’s taken a long time to build all the system modeling capabilities that we’re introducing in 11.3. And they rely on a lot of sophisticated features of the Wolfram Language—including large-scale symbolic manipulation, the ability to robustly solve systems of differential-algebraic equations, handling of quantities and units, and much more. But now that system modeling is integrated into the Wolfram Language, it opens all sorts of important new opportunities—not only in engineering, but in all fields that benefit from being able to readily simulate multicomponent real-world systems.
We first introduced notebooks in Version 1.0 back in 1988—so by now we’ve been polishing how they work for no less than 30 years. Version 11.3 introduces a number of new features. A simple one is that closed cell groups now by default have an “opener button”, as well as being openable using their cell brackets:
I find this helpful, because otherwise I sometimes don’t notice closed groups, with extra cells inside. (And, yes, if you don’t like it, you can always switch it off in the stylesheet.)
Another small but useful change is the introduction of “indefinite In/Out labels”. In a notebook that’s connected to an active kernel, successive cells are labeled In[1], Out[1], etc. But if one’s no longer connected to the same kernel (say, because one saved and reopened the notebook), the In/Out numbering no longer makes sense. So in the past, there were just no In, Out labels shown. But as of Version 11.3, there are still labels, but they’re grayed down, and they don’t have any explicit numbers in them:
Another new feature in Version 11.3 is Iconize. Here’s the basic problem it solves. Let’s say you’ve got some big piece of data or other input that you want to store in the notebook, but you don’t want it to visually fill up the notebook. Well, one thing you can do is to put it in closed cells. But then to use the data you have to do something like creating a variable and so on. Iconize provides a simple, inline way to save data in a notebook.
Here’s how you make an iconized version of an expression:
Iconize[Range[10]]
Now you can use this iconized form in place of giving the whole expression; it just immediately evaluates to the full expression:
Reverse[{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}]
Another convenient use of Iconize is to make code easier to read, while still being complete. For example, consider something like this:
Plot[Sin[Tan[x]], {x, 0, 10}, Filling -> Axis, PlotTheme -> "Scientific"]
You can select the options here, then go to the rightclick menu and say to Iconize them:
The result is an easiertoread piece of code—that still evaluates just as it did before:
Plot[Sin[Tan[x]], {x, 0, 10}, Sequence[Filling -> Axis, PlotTheme -> "Scientific"]]
In Version 11.2 we introduced ExternalEvaluate, for evaluating code in external languages (initially Python and JavaScript) directly from the Wolfram Language. (This is supported on the desktop and in private clouds; for security and provisioning reasons, the public Wolfram Cloud only runs pure Wolfram Language code.)
In Version 11.3 we’re now making it even easier to enter external code in notebooks. Just start an input cell with a > and you’ll get an external code cell (you can stickily select the language you want):
ExternalEvaluate["Python", "import platform; platform.platform()"]
And, yes, what comes back is a Wolfram Language expression that you can compute with:
StringSplit[%, "-"]
We put a lot of emphasis on documenting the Wolfram Language—and traditionally we’ve had basically three kinds of components to our documentation: “reference pages” that cover a single function, “guide pages” that give a summary with links to many functions, and “tutorials” that provide narrative introductions to areas of functionality. Well, as of Version 11.3 there’s a fourth kind of component: workflows—which is what the gray tiles at the bottom of the “root guide page” lead to.
When everything you’re doing is represented by explicit Wolfram Language code, the In/Out paradigm of notebooks is a great way to show what’s going on. But if you’re clicking around, or, worse, using external programs, this isn’t enough. And that’s where workflows come in—because they use all sorts of graphical devices to present sequences of actions that aren’t just entering Wolfram Language input.
So if you’re getting coordinates from a plot, or deploying a complex form to the web, or adding a banner to a notebook, then expect to follow the new workflow documentation that we have. And, by the way, you’ll find links to relevant workflows from reference pages for functions.
Another big new interface-related thing in Version 11.3 is Presenter Tools—a complete environment for creating and running presentations that include live interactivity. What makes Presenter Tools possible is the rich notebook system that we’ve built over the past 30 years. But what it does is to add all the features one needs to conveniently create and run really great presentations.
People have been using our previous SlideShow format to give presentations with Wolfram Notebooks for about 20 years. But it was never a complete solution. Yes, it provided nice notebook features like live computation in a slide show environment, but it didn’t do “PowerPoint-like” things such as automatically scaling content to screen resolution. To be fair, we expected that operating systems would just intrinsically solve problems like content scaling. But it’s been 20 years and they still haven’t. So now we’ve built the new Presenter Tools, which both solves such problems and adds a whole range of features that make creating great presentations with notebooks as easy as possible.
To start, just choose File > New > Presenter Notebook. Then pick your template and theme, and you’re off and running:
Here’s what it looks like when you’re editing your presentation (and you can change themes whenever you want):
When you’re ready to present, just press Start Presentation. Everything goes full screen and is automatically scaled to the resolution of the screen you’re using. But here’s the big difference from PowerPoint-like systems: everything is live, interactive, editable, and scrollable. For example, you can have a Manipulate right inside a slide, and you can immediately interact with it. (Oh, and everything can be dynamic, say recreating graphics based on data that’s being imported in real time.) You can also use things like cell groups to organize content in slides. And you can edit what’s on a slide, and for example, do livecoding, running your code as you go.
When you’re ready to go to a new slide, just press a single key (or have your remote do it for you). By default, the key is Page Down (so you can still use arrow keys in editing), but you can set a different key if you want. You can have Presenter Tools show your slides on one display, then display notes and controls on another display. When you make your slides, you can include SideNotes and SideCode. SideNotes are “PowerPoint-like” textual notes. But SideCode is something different. It’s actually based on something I’ve done in my own talks for years. It’s code you’ve prepared, that you can “magically” insert onto a slide in real time during your presentation, immediately evaluating it if you want.
I’ve given a huge number of talks using Wolfram Notebooks over the years. A few times I’ve used the SlideShow format, but mostly I’ve just done everything in an ordinary notebook, often keeping notes on a separate device. But now I’m excited that with Version 11.3 I’ve got basically exactly the tools I need to prepare and present talks. I can predefine some of the content and structure, but then the actual talk can be very dynamic and spontaneous—with live editing, livecoding and all sorts of interactivity.
While we’re discussing interface capabilities, here’s another new one: Wolfram Chat. When people are interactively working together on something, it’s common to hear someone say “let me just send you a piece of code” or “let me send you a Manipulate”. Well, in Version 11.3 there’s now a very convenient way to do this, built directly into the Wolfram Notebook system—and it’s called Wolfram Chat. [Livestreamed design discussion.]
Just select File > New > Chat; you’ll get asked who you want to “chat with”—and it could be anyone anywhere with a Wolfram ID (though of course they do have to accept your invitation):
Then you can start a chat session, and, for example, put it alongside an ordinary notebook:
The neat thing is that you can send anything that can appear in a notebook, including images, code, dynamic objects, etc. (though it’s sandboxed so people can’t send “code bombs” to each other).
There are lots of obvious applications of Wolfram Chat, not only in collaboration, but also in things like classroom settings and technical support. And there are some other applications too. Like for running livecoding competitions. And in fact one of the ways we stress-tested Wolfram Chat during development was to use it for the livecoding competition at the Wolfram Technology Conference last fall.
One might think that chat is something straightforward. But actually it’s surprisingly tricky, with a remarkable number of different situations and cases to cover. Under the hood, Wolfram Chat is using both the Wolfram Cloud and the new pub-sub channel framework that we introduced in Version 11.0. In Version 11.3, Wolfram Chat is only being supported for desktop Wolfram Notebooks, but it’ll be coming soon to notebooks on the web and on mobile.
We’re always polishing the Wolfram Language to make it more convenient and productive to use. And one way we do this is by adding new little “convenience functions” in every version of the language. Often what these functions do is pretty straightforward; the challenge (which has often taken years) is to come up with really clean designs for them. (You can see quite a bit of the discussion about the new convenience functions for Version 11.3 in livestreams we’ve done recently.)
Here’s a function that it’s sort of amazing we’ve never explicitly had before—a function that just constructs an expression from its head and arguments:
Construct[f, x, y]
Why is this useful? Well, it can save explicitly constructing pure functions with Function or &, for example in a case like this:
Fold[Construct, f, {a, b, c}]
Another function that at some level is very straightforward (but about whose name we agonized for quite a while) is Curry. Curry (named after “currying”, which is in turn named after Haskell Curry) essentially makes operator forms, with Curry[f,n] “currying in” n arguments:
Curry[f, 3][a][b][c][d][e]
The one-argument form of Curry itself is:
Curry[f][x][y]
Why is this useful? Well, some functions (like Select, say) have built-in “operator forms”, in which you give one argument, then you “curry in” others:
Select[# > 5 &][Range[10]]
But what if you wanted to create an operator form yourself? Well, you could always explicitly construct it using Function or &. But with Curry you don’t need to do that. Like here’s an operator form of D, in which the second argument is specified to be x:
Curry[D][x]
Now we can apply this operator form to actually do differentiation with respect to x:
%[f[x]]
Yes, Curry is at some level rather abstract. But it’s a nice convenience if you understand it—and understanding it is a good exercise in understanding the symbolic structure of the Wolfram Language.
Talking of operator forms, by the way, NearestTo is an operator-form analog of Nearest (the one-argument form of Nearest itself generates a NearestFunction):
✕ NearestTo[2.3][{1, 2, 3, 4, 5}] 
Here’s an example of why this is useful. This finds the 5 chemical elements whose densities are nearest to 10 g/cc:
✕ Entity["Element", "Density" -> NearestTo[Quantity[10, "Grams"/"Centimeters"^3], 5]] // EntityList 
In Version 10.1 in 2015 we introduced a bunch of functions that operate on sequences in lists. Version 11.3 adds a couple more such functions. One is SequenceSplit. It’s like StringSplit for lists: it splits lists at the positions of particular sequences:
✕ SequenceSplit[{a, b, x, x, c, d, x, e, x, x, a, b}, {x, x}] 
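The behavior of SequenceSplit is easy to state precisely; here’s a rough Python sketch of the same operation (my own illustration, not the built-in implementation):

```python
def sequence_split(lst, seq):
    """Split lst at each non-overlapping occurrence of the subsequence
    seq, dropping the separators -- a sketch of SequenceSplit's behavior."""
    out, cur, i, n = [], [], 0, len(seq)
    while i < len(lst):
        if lst[i:i + n] == seq:
            if cur:
                out.append(cur)
            cur = []
            i += n
        else:
            cur.append(lst[i])
            i += 1
    if cur:
        out.append(cur)
    return out

sequence_split(list("abxxcdxexxab"), list("xx"))
# → [['a', 'b'], ['c', 'd', 'x', 'e'], ['a', 'b']]
```

Note that the lone `x` between `d` and `e` survives: only the full two-element sequence acts as a separator.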
Also new in the “Sequence family” is the function SequenceReplace:
✕ SequenceReplace[{a, b, x, x, c, d, x, e, x, x, a, b}, {x, n_} -> {n, n, n}] 
Just as we’re always polishing the core programming functionality of the Wolfram Language, we’re also always polishing things like visualization.
In Version 11.0, we added GeoHistogram, here showing “volcano density” in the US:
✕ GeoHistogram[GeoPosition[GeoEntities[Entity["Country", "UnitedStates"], "Volcano"]]] 
In Version 11.3, we’ve added GeoSmoothHistogram:
✕ GeoSmoothHistogram[GeoPosition[GeoEntities[Entity["Country", "UnitedStates"], "Volcano"]]] 
Also new in Version 11.3 are callouts in 3D plots, here random words labeling random points (but note how the words are positioned to avoid each other):
✕ ListPointPlot3D[Table[Callout[RandomReal[10, 3], RandomWord[]], 25]] 
We can make a slightly more meaningful plot of words in 3D by using the new machine-learning-based FeatureSpacePlot3D (notice for example that “vocalizing” and “crooning” appropriately end up close together):
✕ FeatureSpacePlot3D[RandomWord[20]] 
Talking of machine learning, Version 11.3 continues our aggressive development of automated machine learning, building both general tools, and specific functions that make use of machine learning.
An interesting example of a new function is FindTextualAnswer, which takes a piece of text, and tries to find answers to textual questions. Here we’re using the Wikipedia article on “rhinoceros”, asking how much a rhino weighs:
✕ FindTextualAnswer[ WikipediaData["rhinoceros"], "How much does a rhino weigh?"] 
It almost seems like magic. Of course it doesn’t always work, and it can do things that we humans would consider pretty stupid. But it’s using very state-of-the-art machine learning methodology, together with a lot of unique training data based on Wolfram|Alpha. We can see a little more of what it does if we ask not just for its top answer about rhino weights, but for its top 5:
✕ FindTextualAnswer[ WikipediaData["rhinoceros"], "How much does a rhino weigh?", 5] 
Hmmm. So what’s a more definitive answer? Well, for that we can use our actual curated knowledgebase:
✕ Entity["Species", "Family:Rhinocerotidae"][EntityProperty["Species", "Weight"]] 
Or in tons:
✕ UnitConvert[%, "ShortTons"] 
FindTextualAnswer is no substitute for our whole data curation and computable data strategy. But it’s useful as a way to quickly get a first guess of an answer, even from completely unstructured text. And, yes, it should do well at critical reading exercises, and could probably be made to do well at Jeopardy! too.
We humans respond a lot to human faces, and with modern machine learning it’s possible to do all sorts of face-related computations—and in Version 11.3 we’ve added systematic functions for this. Here FindFaces pulls out faces (of famous physicists) from a photograph:
✕ FindFaces[CloudGet["https://wolfr.am/sWoDYqbb"], "Image"] 
FacialFeatures uses machine learning methods to estimate various attributes of faces (such as the apparent age, apparent gender and emotional state):
✕ FacialFeatures[CloudGet["https://wolfr.am/sWRQARe8"]]//Dataset 
These features can for example be used as criteria in FindFaces, here picking out physicists who appear to be under 40:
✕ FindFaces[CloudGet["https://wolfr.am/sWoDYqbb"], #Age < 40 &, "Image"] 
There are now all sorts of functions in the Wolfram Language (like FacialFeatures) that use neural networks inside. But for several years we’ve also been energetically building a whole subsystem in the Wolfram Language to let people work directly with neural networks. We’ve been building on top of low-level libraries (particularly MXNet, to which we’ve been big contributors), so we can make use of all the latest GPU and other optimizations. But our goal is to build a high-level symbolic layer that makes it as easy as possible to actually set up neural net computations. [Livestreamed design discussions 1, 2 and 3.]
There are many parts to this. Setting up automatic encoding and decoding to standard Wolfram Language constructs for text, images, audio and so on. Automatically being able to knit together individual neural net operations, particularly ones that deal with things like sequences. Being able to automate training as much as possible, including automatically doing hyperparameter optimization.
But there’s something perhaps even more important too: having a large library of existing, trained (and untrained) neural nets, that can both be used directly for computations, and can be used for transfer learning, or as feature extractors. And to achieve this, we’ve been building our Neural Net Repository:
There are networks here that do all sorts of remarkable things. And we’re adding new networks every week. Each network has its own page, that includes examples and detailed information. The networks are stored in the cloud. But all you have to do to pull them into your computation is to use NetModel:
✕ NetModel["3D Face Alignment Net Trained on 300W Large Pose Data"] 
Here’s the actual network used by FindTextualAnswer:
✕ NetModel["Wolfram FindTextualAnswer Net for WL 11.3"] 
One thing that’s new in Version 11.3 is the iconic representation we’re using for networks. We’ve optimized it to give you a good overall view of the structure of net graphs, but then to allow interactive drill-down to any level of detail. And when you train a neural network, the interactive panels that come up have some spiffy new features—and with NetTrainResultsObject, we’ve now made the actual training process itself computable.
Version 11.3 has some new layer types like CTCLossLayer (particularly to support audio), as well as lots of updates and enhancements to existing layer types (10x faster LSTMs on GPUs, automatic variable-length convolutions, extensions of many layers to support arbitrary-dimension inputs, etc.). In Version 11.3 we’ve had a particular focus on recurrent networks and sequence generation. And to support this, we’ve introduced things like NetStateObject—that basically allows a network to have a persistent state that’s updated as a result of input data the network receives.
In developing our symbolic neural net framework we’re really going in two directions. The first is to make everything more and more automated, so it’s easier and easier to set up neural net systems. But the second is to be able to readily handle more and more neural net structures. And in Version 11.3 we’re adding a whole collection of “network surgery” functions—like NetTake, NetJoin and NetFlatten—to let you go in and tweak and hack neural nets however you want. Of course, our system is designed so that even if you do this, our whole automated system—with training and so on—still works just fine.
For more than 30 years, we’ve been on a mission to make as much mathematics as possible computational. And in Version 11.3 we’ve finally started to crack an important holdout area: asymptotic analysis.
Here’s a simple example: find an approximate solution to a differential equation near x = 0:
✕ AsymptoticDSolveValue[x^2 y'[x] + (x^2 + 1) y[x] == 0, y[x], {x, 0, 10}] 
At first, this might just look like a power series solution. But look more carefully: there’s an e^{(1/x)} factor that would just give infinity at every order as a power series in x. But with Version 11.3, we’ve now got asymptotic analysis functions that handle all sorts of scales of growth and oscillation, not just powers.
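A quick check by separating variables (a standard calculation, not part of the original discussion) shows where that factor comes from:

$$x^2 y' + (x^2+1)\,y = 0 \;\Longrightarrow\; \frac{y'}{y} = -1 - \frac{1}{x^2} \;\Longrightarrow\; y(x) = C\,e^{1/x - x} = C\,e^{1/x}\!\left(1 - x + \frac{x^2}{2} - \cdots\right),$$

so the exact solution is an essential singularity $e^{1/x}$ times an ordinary power series, which is exactly the shape AsymptoticDSolveValue reports.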
Back when I made my living as a physicist, it always seemed like some of the most powerful dark arts centered around perturbation methods. There were regular perturbations and singular perturbations. There were things like the WKB method, and the boundary layer method. The point was always to compute an expansion in some small parameter, but it seemed to always require different trickery in different cases to achieve it. But now, after a few decades of work, we finally in Version 11.3 have a systematic way to solve these problems. Like here’s a differential equation where we’re looking for the solution for small ε:
✕ AsymptoticDSolveValue[{\[Epsilon] y''[x] + (x + 1) y[x] == 0, y[0] == 1, y[1] == 0}, y[x], x, {\[Epsilon], 0, 2}] 
Back in Version 11.2, we added a lot of capabilities for dealing with more sophisticated limits. But with our asymptotic analysis techniques we’re now also able to do something else, that’s highly relevant for all sorts of problems in areas like number theory and computational complexity theory, which is to compare asymptotic growth rates.
This is asking: is 2^(n^k) asymptotically less than (n^m)! as n → ∞? The result: yes, subject to certain conditions:
✕ AsymptoticLess[2^n^k, (n^m)!, n -> \[Infinity]] 
One of the features of Wolfram|Alpha popular among students is its “Show Steps” functionality, in which it synthesizes “on-the-fly tutorials” showing how to derive answers it gives. But what actually are the steps in, say, a Show Steps result for algebra? Well, they’re “elementary operations” like “add the corresponding sides of two equations”. And in Version 11.3, we’re including functions to just directly do things like this:
✕ AddSides[a == b, c == d] 
✕ MultiplySides[a == b, c == d] 
And, OK, it seems like these are really trivial functions, that basically just operate on the structure of equations. And that’s actually what I thought when I said we should implement them. But as our Algebra R&D team quickly pointed out, there are all sorts of gotchas (“what if b is negative?”, etc.), that are what students often get wrong—but that with all of the algorithmic infrastructure in the Wolfram Language it’s easy for us to get right:
✕ MultiplySides[x/b > 7, b] 
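The kind of gotcha the team means is concrete: multiplying an inequality by a quantity of unknown sign can flip it. A toy numeric version in Python (my illustration; the Wolfram functions handle the symbolic, conditional cases):

```python
def multiply_sides(lhs, rhs, factor):
    """Multiply both sides of lhs > rhs by factor, flipping the
    inequality when factor is negative -- the classic student mistake."""
    if factor > 0:
        return (lhs * factor, ">", rhs * factor)
    if factor < 0:
        return (lhs * factor, "<", rhs * factor)
    raise ValueError("multiplying by zero destroys the inequality")

multiply_sides(10, 7, -2)  # → (-20, '<', -14)
```

With a symbolic `b` there is no branch to take, which is why MultiplySides has to return a result conditioned on the sign of `b`.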
The Wolfram Language is mostly about computing results. But given a result, one can also ask why it’s correct: one can ask for some kind of proof that demonstrates that it’s correct. And for more than 20 years I’ve been wondering how to find and represent general proofs in a useful and computable way in the Wolfram Language. And I’m excited that finally in Version 11.3 the function FindEquationalProof provides an example—which we’ll be generalizing and building on in future versions. [Livestreamed design discussion.]
My all-time favorite success story for automated theorem proving is the tiny (and in fact provably simplest) axiom system for Boolean algebra that I found in 2000. It’s just a single axiom, with a single operator that one can think of as corresponding to the Nand operation. For 11 years, FullSimplify has actually been able to use automated theorem-proving methods inside to compute things. So here it’s starting from my axiom for Boolean algebra, then computing that Nand is commutative:
✕ FullSimplify[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]] 
But this just tells us the result; it doesn’t give any kind of proof. Well, in Version 11.3, we can now get a proof:
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]] 
What is the proof object? We can see from the summary that the proof takes 102 steps. Then we can ask for a “proof graph”. The green arrow at the top represents the original axiom; the red square at the bottom represents the thing being proved. All the nodes in the middle are intermediate lemmas, proved from each other according to the connections shown.
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]]; proof["ProofGraph"] 
What’s actually in the proof? Well, it’s complicated. But here’s a dataset that gives all the details:
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]]; proof["ProofDataset"] 
You can get a somewhat more narrative form as a notebook too:
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]]; proof["ProofNotebook"] 
And then you can also get a “proof function”, which is a piece of code that can be executed to verify the result:
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]]; proof["ProofFunction"] 
Unsurprisingly, and unexcitingly, it gives True if you run it:
✕ proof = FindEquationalProof[nand[p, q] == nand[q, p], ForAll[{a, b, c}, nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c]]; proof["ProofFunction"][] 
Now that we can actually generate symbolic proof structures in the Wolfram Language, there’s a lot of empirical metamathematics to do—as I’ll discuss in a future post. But given that FindEquationalProof works on arbitrary “equationlike” symbolic relations, it can actually be applied to lots of things—like verifying protocols and policies, for example in popular areas like blockchain.
The Wolfram Knowledgebase grows every single day—partly through systematic data feeds, and partly through new curated data and domains being explicitly added. If one asks what happens to have been added between Version 11.2 and Version 11.3, it’s a slightly strange grab bag. There are 150+ new properties about public companies. There are 900 new named features on Pluto and Mercury. There are 16,000 new anatomical structures, such as nerve pathways. There are nearly 500 new “notable graphs”. There are thousands of new mountains, islands, notable buildings, and other geo-related features. There are lots of new properties of foods, and new connections to diseases. And much more.
But in terms of typical everyday use of the Wolfram Knowledgebase the most important new feature in Version 11.3 is the entity prefetching system. The knowledgebase is obviously big, and it’s stored in the cloud. But if you’re using a desktop system, the data you need is “magically” downloaded for you.
Well, in Version 11.3, the magic got considerably stronger. Because now when you ask for one particular item, the system will try to figure out what you’re likely to ask for next, and it’ll automatically start asynchronously prefetching it, so when you actually ask for it, it’ll already be there on your computer—and you won’t have to wait for it to download from the cloud. (If you want to do the prefetching “by hand”, there’s the function EntityPrefetch to do it. Note that if you’re using the Wolfram Language in the cloud, the knowledgebase is already “right there”, so there’s no downloading or prefetching to do.)
The whole prefetching mechanism is applied quite generally. So, for example, if you use Interpreter to interpret some input (say, US state abbreviations), information about how to do the interpretations will also get prefetched—so if you’re using the desktop, the interpretations can be done locally without having to communicate with the cloud.
You’ve been able to send email from the Wolfram Language (using SendMail) for a decade. But starting in Version 11.3, it can use full HTML formatting, and you can embed lots of things in it—not just graphics and images, but also cloud objects, datasets, audio and so on. [Livestreamed design discussion.]
Version 11.3 also introduces the ability to send text messages (SMS and MMS) using SendMessage. For security reasons, though, you can only send to your own mobile number, as given by the value of $MobilePhone (and, yes, obviously, the number gets validated).
The Wolfram Language has been able to import mail messages and mailboxes for a long time, and with MailReceiverFunction it’s also able to respond to incoming mail. But in Version 11.3 something new that’s been added is the capability to deal with live mailboxes.
First, connect to an (IMAP, for now) mail server (I’m not showing the authentication dialog that comes up):
✕ mail = MailServerConnect[] 
Then you can basically use the Wolfram Language as a programmable mail client. This gives you a dataset of current unread messages in your mailbox:
✕ MailSearch[<|"From" -> "fahim"|>] 
Now we can pick out one of these messages, and we get a symbolic MailItem object, that for example we can delete:
✕ MailSearch[<|"From" -> "fahim"|>][[1]] 
✕ MailExecute["Delete", %%["MailItem"]] 
Version 11.3 supports a lot of new systemslevel operations. Let’s start with a simple but useful one: remote program execution. The function RemoteRun is basically like Unix rsh: you give it a host name (or IP address) and it runs a command there. The Authentication option lets you specify a username and password. If you want to run a persistent program remotely, you can now do that with RemoteRunProcess, which is the remote analog of the local RunProcess.
In dealing with remote computer systems, authentication is always an issue—and for several years we’ve been building a progressively more sophisticated symbolic authentication framework in the Wolfram Language. In Version 11.3 there’s a new AuthenticationDialog function, which pops up a whole variety of appropriately configured authentication dialogs. Then there’s GenerateSecuredAuthenticationKey—which generates OAuth SecuredAuthenticationKey objects that people can use to authenticate calls into the Wolfram Cloud from the outside.
Also at a systems level, there are some new import/export formats, like BSON (JSON-like binary serialization format) and WARC (web archive format). There are also HTTPResponse and HTTPRequest formats, that (among many other things) you can use to basically write a web server in the Wolfram Language in a couple of lines.
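For comparison, here’s what a comparably minimal web server looks like in Python’s standard library (my own sketch, just to make the “couple of lines” claim concrete):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a plain-text "hello".
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), Hello).serve_forever()  # uncomment to run
```

The symbolic HTTPRequest/HTTPResponse approach plays the same role, with the request and response as first-class expressions rather than handler callbacks.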
We introduced ByteArray objects into the Wolfram Language quite a few years ago—and we’ve been steadily growing support for them. In Version 11.3, there are BaseEncode and BaseDecode for converting between byte arrays and Base64 strings. Version 11.3 also extends Hash (which, among other things, works on byte arrays), adding various types of hashing (such as double SHA256 and RIPEMD) that are used for modern blockchain and cryptocurrency purposes.
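The operations themselves are standard ones; here are their everyday equivalents in Python, including the “double SHA-256” construction that Bitcoin-style blockchains use:

```python
import base64, hashlib

data = bytes([1, 2, 3])

# Base64 round trip (what BaseEncode/BaseDecode do for byte arrays):
b64 = base64.b64encode(data).decode()   # "AQID"
assert base64.b64decode(b64) == data

# Double SHA-256: hash the bytes, then hash the digest again.
digest = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
```
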
We’re always adding more kinds of data that we can make computable in the Wolfram Language, and in Version 11.3 one addition is system process data, of the sort that you might get from a Unix ps command:
✕ SystemProcessData[] 
Needless to say, you can do very detailed searches for processes with specific properties. You can also use SystemProcesses to get an explicit list of ProcessObject symbolic objects, which you can interrogate and manipulate (for example, by using KillProcess).
✕ RandomSample[SystemProcesses[], 3] 
Of course, because everything is computable, it’s easy to do things like make plots of the start times of processes running on your computer (and, yes, I last rebooted a few days ago):
✕ TimelinePlot[SystemProcessData[][All, "StartTime"]] 
If you want to understand what’s going on around your computer, Version 11.3 provides another powerful tool: NetworkPacketRecording. You may have to do some permissions setup, but then this function can record network packets going through any network interface on your computer.
Here’s just 0.1 seconds of packets going in and out of my computer as I quietly sit here writing this post:
✕ NetworkPacketRecording[.1] 
You can drill down to look at each packet; here’s the first one that was recorded:
✕ NetworkPacketRecording[.1][[1]] 
Why is this interesting? Well, I expect to use it for debugging quite regularly—and it’s also useful for studying computer security, not least because you can immediately feed everything into standard Wolfram Language visualization, machine learning and other functionality.
This is already a long post—but there are lots of other things in 11.3 that I haven’t even mentioned. For example, there’ve been all sorts of updates for importing and exporting. Like much more efficient and robust XLS, CSV, and TSV import. Or export of animated PNGs. Or support for metadata in sound formats like MP3 and WAV. Or more sophisticated color quantization in GIF, TIFF, etc. [Livestreamed design discussions 1 and 2.]
We introduced symbolic Audio objects in 11.0, and we’ve been energetically developing audio functionality ever since. Version 11.3 has made audio capture more robust (and supported it for the first time on Linux). It’s also introduced functions like AudioPlay, AudioPause and AudioStop that control open AudioStream objects.
Also new is AudioDistance, which supports various distance measures for audio. Meanwhile, AudioIntervals can now automatically break audio into sections that are separated by silence. And, in a somewhat different area, $VoiceStyles gives the list of possible voices available for SpeechSynthesize.
Here’s a little new math function—that in this case gives a sequence of 0s and 1s in which every length4 block appears exactly once:
✕ DeBruijnSequence[{0, 1}, 4] 
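De Bruijn sequences have a classical construction from Lyndon words; here is that standard algorithm in Python (a textbook sketch, not the built-in implementation):

```python
def de_bruijn(k, n):
    """de Bruijn sequence B(k, n): a length-k^n sequence over the
    alphabet 0..k-1 in which every length-n block appears exactly once
    when read cyclically (standard Lyndon-word construction)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 4)
len(s)  # → 16, with all 16 cyclic length-4 windows distinct
```
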
The Wolfram Language now has sophisticated support for quantities and units—both explicit quantities (like 2.5 kg) and symbolic “quantity variables” (“p which has units of pressure”). But once you’re inside, doing something like solving an equation, you typically want to “factor the units out”. And in 11.3 there’s now a function that systematically does this: NondimensionalizationTransform. There’s also a new mechanism in 11.3 for introducing new kinds of quantities, using IndependentPhysicalQuantity.
Much of the built-in Wolfram Knowledgebase is ultimately represented in terms of entity stores, and in Version 11 we introduced an explicit EntityStore construct for defining new entity stores. Version 11.3 introduces the function EntityRegister, which lets you register an entity store, so that you can refer to the types of entities it contains just like you would refer to built-in types of entities (like cities or chemicals).
Another thing that’s being introduced as an experiment in Version 11.3 is the MongoLink package, which supports connection to external MongoDB databases. We use MongoLink ourselves to manage terabyte-and-beyond datasets for things like machine learning training. And in fact MongoLink is part of our large-scale development effort—whose results will be seen in future versions—to seamlessly support extremely large amounts of externally stored data.
In Version 11.2 we introduced ExternalEvaluate to run code in external languages like Python. In Version 11.3 we’re experimenting with generalizing ExternalEvaluate to control web browsers, by setting up a WebDriver framework. You can give all sorts of commands, both ones that have the same effect as clicking around an actual web browser, and ones that extract things you can see on the page.
Here’s how you can use Chrome (we support both it and Firefox) to open a webpage, then capture it:
✕ ExternalEvaluate["WebDriver-Chrome", {"OpenWebPage" -> "https://www.wolfram.com", "CaptureWebPage"}] // Last 
Well, this post is getting long, but there’s certainly more I could say. Here’s a more complete list of functions that are new or updated in Version 11.3:
But to me it’s remarkable how much there is that’s in a .1 release of the Wolfram Language—and that’s emerged in just the few months since the last .1 release. It’s a satisfying indication of the volume of R&D that we’re managing to complete—by building on the whole Wolfram Language technology stack that we’ve created. And, yes, even in 11.3 there are a great many new corners to explore. And I hope that lots of people will do this, and will use the latest tools we’ve created to discover and invent all sorts of new and important things in the world.
While the Wolfram Language has extensive functionality for web operations, this example requires only the most basic: Import. By default, Import will grab the entire plaintext of a page:
✕
url = "http://www.wrh.noaa.gov/forecast/wxtables/index.php?lat=38.02&\ lon=122.13"; 
✕
Import[url] 
Sometimes plaintext scraping is a good start (e.g. for a text analysis workflow). But it’s important to remember there’s a layer of structured HTML telling your browser how to display everything. The elements we use as visual cues can also help the computer organize data, in many cases better and faster than our eyes.
In this case, we are trying to get data from a table. Information presented in tabular format is often stored in list and table HTML elements. You can extract all of the lists and tables on a page using the “Data” element of Import:
✕
data = Import[url, "Data"] 
Now that you have a list of elements, you can sift through to pick out the information you need. For visually inspecting a list like this, syntax highlighting can save a lot of time (and eye strain!). In the Wolfram Language, placing the cursor directly inside any grouping symbol—parentheses, brackets or in this case, curly braces—highlights that symbol, along with its opening/closing counterpart. Examining these sublists is an easy way to get a feel for the structure of the overall list. Clicking inside the first inner brace of the imported data shows that the first element is a list of links from the navigation bar:
This means the list of actual weather information (precipitation, temperature, humidity, wind, etc.) is located in the second element. By successively clicking inside curly braces, you can find the smallest list that contains all the weather data—unsurprisingly, it’s the one that starts with “Custom Weather Forecast Table”:
✕
data[[2]] 
Now use FirstPosition to get the correct list indices:
✕
FirstPosition[data, "Custom Weather Forecast Table"] 
Dropping the final index to go up one level, here’s the full table:
✕
table = data[[2, 2, 1, 2]] 
Now that you have the data, you can do some analysis. On the original webpage, some rows of the table only have one value per day, while the others have four. In the imported data, this translates to differing row lengths—either seven items or 28, with optional row headings:
✕
Length /@ table 
So if you want tomorrow’s temperatures, you can find the row with the appropriate heading and take the first four entries after the heading:
✕
FirstPosition[table, "Temp"] 
✕
table[[10, 2 ;; 5]] 
Conveniently, the temperature data is recognized as numerical, so it’s easy to pass directly to functions. Here is the Mean of all temperatures for the week (I use Rest to omit the row labels that start each list):
✕
N@Mean@Rest@table[[10]] 
And here’s a ListLinePlot of all temperatures for the week:
✕
ListLinePlot[Rest@table[[10]]] 
Interpreter can be used for parsing other data types. For a simple example, take the various weather elements that are reported as percentages:
✕
percents = table[[{5, 11, 13}]] 
These values are currently represented as strings, which aren’t friendly to numerical computations. Applying Interpreter["Percent"] automatically converts each value to a numerical Quantity with percent as the unit:
✕
{precip, clouds, humidity} = Interpreter["Percent"] /@ (Rest /@ percents) 
Now that they’re recognized as percentages, you can plot them together:
✕
labels = First /@ percents 
✕
ListLinePlot[{precip, clouds, humidity}, PlotLabels -> labels] 
By extracting the date and time information attached to those values and parsing them with DateObject, you can convert the data into a TimeSeries object:
✕
dates = DateObject /@ Flatten@Table[ table[[2, j]] <> " " <> i, {j, Length@table[[2]]}, {i, table[[9, 2 ;; 5]]}]; 
✕
ts = TimeSeries /@ (Transpose[{dates, #}] & /@ {precip, clouds, humidity}); 
This is perfect for a DateListPlot, which labels the x axis with dates:
✕
DateListPlot[ts, PlotLabels -> labels] 
Getting the data you need is easy with the Wolfram Language, but that’s just the beginning of the story! With our integrated data framework, you can do so much more: automate the import process, simplify data access and even create your own permanent data resources.
In my next post, I’ll explore some advanced structuring and cleaning techniques, demonstrating how to create a structured dataset from scraped data.
Author Stephen Lynch provides an introduction to the theory of dynamical systems. With the aid of Mathematica, this book’s hands-on approach first deals with continuous systems using ordinary differential equations, while the second part is devoted to the study of discrete dynamical systems. Lynch takes the reader from basic theory to recently published research material. Emphasized throughout are numerous applications to biology, chemical kinetics, economics, electronics, epidemiology, nonlinear optics, mechanics, population dynamics and neural networks.
Groups and Manifolds is an introduction to the mathematics of symmetry, with a variety of examples for physicists. Authors Pietro Giuseppe Frè and Alexander Fedotov cover both classical symmetry—as seen in crystallography—and the mathematical concepts used in supersymmetric field theories. After a basic introduction to group theory, they discuss Lie algebras and basic notions of differential geometry. Mathematica allows readers to develop group-theoretical constructions.
In this undergraduate textbook, Mark A. Cunningham discusses the nature of the microscopic universe from a modern perspective, based on Einstein’s notions of relativity and Noether’s proof of the emergence of conservation laws from symmetries of the equations of motion. These ideas drove the development of the Standard Model of particle physics and subsequent attempts to define a unified (string) theory. The second half of the book explores various aspects of many-body physics, ranging from chemical systems to plasmas to black holes. Cunningham makes extensive use of Mathematica to enable students to explore the meanings of different equations in a graphical manner. Students will gain an appreciation of the current state of physical theory in preparation for more detailed, advanced study as upperclassmen.
Author Gautam Dasgupta presents a highly original treatment of the fundamentals of FEM, developed using computer algebra, based on undergraduate-level engineering mathematics and the mechanics of solids. The book is divided into two distinct parts of nine chapters and seven appendices. The appendices give a short introduction to Mathematica, followed by truss analysis using symbolic codes that could be potentially used in all FEM problems to assemble element matrices and solve for all unknowns. All Mathematica codes for theoretical formulations and graphics are included, with extensive numerical examples.
Originally written in Greek, Euclidean Economics is now updated and translated to English. Author Sophocles Michaelides contends that economics can be studied and utilized on the basis of a minimum of fundamental hypotheses and the general laws of mathematics. He uses the Wolfram Language exclusively in the book’s calculations. Part two of the book includes all the notebooks in printed form.
Author Mikio Tohyama is an engineer who plays the piano every day, so he is attuned to sound on both scientific and experiential levels. He uses the Wolfram Language extensively to generate figures illustrating the nature of sound, focusing on the characteristics of sound waves in the context of time structures. This time-domain approach provides an informative and intuitively understandable description of various acoustic topics, such as sound waves traveling in an acoustic tube or in other media where spectral or modal analysis can be intensively performed.
Authors Cliff Hastings, Kelvin Mischo and Michael Morrison have updated the book hailed as “a much-needed complement to existing support material for Mathematica… that puts to rest, once and for all, the claim that Mathematica is difficult and not accessible to the school and college levels of our computational universe” (Fred Szabo at Concordia University in Montreal, author of Actuaries’ Survival Guide). And now this second edition of the definitive guide to the Wolfram Language is available in Japanese.
Stephen Wolfram’s An Elementary Introduction to the Wolfram Language, now in its second edition, remains the premium gateway for anyone interested in learning programming and computational thinking through the Wolfram Language.
While available in print from all major booksellers, the complete text is also available free online, and forms the basis of a newly launched fully interactive course through Wolfram U. Wolfram U expands educational opportunities with free courses and video classes on everything from data science and statistics to machine learning and image processing. Teachers seeking to incorporate coding in the classroom should also consider Stephen Wolfram’s blog post, “Machine Learning for Middle Schoolers,” which includes an overview of the book.
An Elementary Introduction to the Wolfram Language is available in English, Chinese and Korean, and is coming soon in Spanish and Russian.
Are you ever certain that somewhere in a text or set of texts, the answer to a pressing question is waiting to be found, but you don’t want to take the time to skim through thousands of words to find what you’re looking for? Well, soon the Wolfram Language will provide concise answers to your specific, fact-based questions directed toward an unstructured collection of texts (with a technology very different from that of Wolfram|Alpha, which is based on a carefully curated knowledgebase).
Let’s start with the essence of FindTextualAnswer. This feature, available in the upcoming release of the Wolfram Language, answers questions by quoting the most appropriate excerpts of a text that is presumed to contain the relevant information.
✕
FindTextualAnswer["Lake Titicaca is a large, deep lake in the Andes \ on the border of Bolivia and Peru. By volume of water and by surface \ area, it is the largest lake in South America", "Where is Titicaca?"] 
FindTextualAnswer can mine Wikipedia—a convenient source of continuously refreshed knowledge—for answers to your questions. Let’s use WikipediaData!
✕
bandArticle = WikipediaData["The Who"]; Snippet[bandArticle] 
✕
FindTextualAnswer[bandArticle, "Who founded the Who?"] 
FindTextualAnswer can yield several possible answers, the probabilities of those answers being correct and other properties that can help you understand the context of each answer:
✕
FindTextualAnswer[bandArticle, "Who founded the Who?", 3, {"Probability", "HighlightedSentence"}] // TableForm 
FindTextualAnswer can efficiently answer several questions about the same piece of text:
✕
text = "Even thermometers can't keep up with the plunging \ temperatures in Russia's remote Yakutia region, which hit minus 88.6 \ degrees Fahrenheit in some areas Tuesday. In Yakutia  a region of 1 \ million people about 3,300 miles east of Moscow  students routinely \ go to school even in minus 40 degrees. But school was cancelled \ Tuesday throughout the region and police ordered parents to keep \ their children inside. In the village of Oymyakon, one of the coldest inhabited places on \ earth, stateowned Russian television showed the mercury falling to \ the bottom of a thermometer that was only set up to measure down to \ minus 50 degrees. In 2013, Oymyakon recorded an alltime low of minus \ 98 Fahrenheit."; questions = {"What is the temperature in Yakutia?", "Name one of the coldest places on earth?", "When was the lowest temperature recorded in Oymyakon?", "Where is Yakutia?", "How many live in Yakutia?", "How far is Yakutia from Moscow?"}; Thread[questions > FindTextualAnswer[text, questions]] // Column 
Because FindTextualAnswer is based on statistical methods, asking the same question in different ways can provide different answers:
✕
cityArticle = WikipediaData["Brasília"]; Snippet[cityArticle] 
✕
questions = {"Brasilia was inaugurated when?", "When was Brasilia finally constructed?"}; FindTextualAnswer[cityArticle, questions, 3, {"Probability", "HighlightedSentence"}] 
The answers to similar questions found in different pieces of text can be merged and displayed nicely in a WordCloud:
✕
WordCloud[Catenate[FindTextualAnswer[cityArticle, questions, 5, {"String", "Probability"}]], WordSpacings -> {10, 4}, ColorFunction -> "TemperatureMap"] 
Any specialized textual knowledge database can be given to FindTextualAnswer. It can be a set of local files, a URL, a textual resource in the Wolfram Data Repository, the result of a TextSearch or a combination of all of these:
✕
FindTextualAnswer[{File["ExampleData/USConstitution.txt"], WikipediaData["US Constitutional Law"]}, "which crimes are punished in the US Constitution?", 5] 
✕
FindTextualAnswer[texts, "which crimes are punished in the US Constitution?", 5] 
FindTextualAnswer is good, but not perfect. It can occasionally make silly, sometimes funny or inexplicable mistakes. You can see why it is confused here:
✕
question = "Who is Raoul?"; context = ResourceData["The Phantom of the Opera"]; FindTextualAnswer[context, question, 1, "HighlightedSentence"] // First 
We will keep improving the underlying statistical model in future versions.
FindTextualAnswer combines well-established techniques for information retrieval and state-of-the-art deep learning techniques to find answers in a text.
If a significant number of paragraphs is given to FindTextualAnswer, it first selects the ones closest to the question. The distance is based on a term frequency–inverse document frequency (TF-IDF) weighting of the matching terms, similar to the following lines of code:
✕
corpus = WikipediaData["Rhinoceros"]; passages = TextCases[corpus, "Sentence"]; 
✕
tfidf = FeatureExtraction[passages, "TFIDF"]; 
✕
question = "What are the horns of a rhinoceros made of?"; 
✕
TakeSmallestBy[passages, CosineDistance[tfidf@#, tfidf@question] &, 2] 
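For readers who want to see the weighting spelled out, here is a minimal pure-Python sketch of the same TF-IDF-plus-cosine passage selection; the passages and helper names are made up for illustration and are not the Wolfram implementation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenized document to a {term: tf-idf weight} dictionary."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

passages = [
    "the horns of a rhinoceros are made of keratin".split(),
    "rhinoceros populations declined sharply in the twentieth century".split(),
    "elephants and rhinos share parts of their range".split(),
]
question = "what are rhinoceros horns made of".split()

# Include the question in the corpus so it gets weighted consistently.
vectors = tfidf_vectors(passages + [question])
qvec = vectors[-1]
best = max(range(len(passages)),
           key=lambda i: cosine_similarity(vectors[i], qvec))
print(best)  # 0: the keratin sentence matches the question best
```

The passage about keratin shares the rare, highly weighted terms "horns" and "made" with the question, so it wins despite the other passages also mentioning rhinos.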
The TF-IDF-based selection allows us to discard a good number of irrelevant passages of text, and to spend the more expensive computation on locating the answer(s) more precisely within a subset of candidate paragraphs:
✕
FindTextualAnswer[corpus, question, 2, "HighlightedSentence"] // Column 
This finer detection of the answer is done by a deep artificial neural network inspired by the cutting-edge deep learning techniques for question answering.
The neural network at the core of FindTextualAnswer was constructed, trained and deployed using the Wolfram neural network capabilities, primarily NetGraph, NetTrain and NetModel. The network is shown in the following directed graph of layers:
✕
net = NetModel["Wolfram FindTextualAnswer Net for WL 11.3"] 
This network was first developed using the Stanford Question Answering Dataset (SQuAD) before using similarly labeled data from various domains and textual sources of knowledge, including the knowledgebase used to power Wolfram|Alpha. Each training sample is a tuple with a paragraph of text, a question and the position of the answer in the paragraph. The current neural network takes as input a sequence of tokens, where each token can be a word, a punctuation mark or any symbol in the text. As the network was trained to output a unique span, the positions of the answers are given as start and end indices of these tokens, as in the tokenized version of the SQuAD dataset in the Wolfram Data Repository. A single training sample is shown here:
✕
ResourceData["SQuAD v1.1 Tokens Generated with WL", "TrainingData"][[All, 14478]] 
Several types of questions and answers are used for training; these can be classified as follows for the SQuAD dataset:
The following chart shows the different components of the network and their roles in understanding the input text in light of the question:
A first part encodes all the words in the context and the question in a semantic space. It mainly involves two deep learning goodies: (1) word embeddings that map each word in a semantic vector space, independent of the other words in the text; and (2) a bidirectional recurrent layer to get the semantics of the words in context.
The embeddings already capture a lot about the semantics—putting together synonyms and similar concepts—as illustrated below using FeatureSpacePlot to show the computed semantic relationships among fruits, animals and colors.
✕
animals = {"Alligator", "Bear", Sequence[ "Bird", "Bee", "Camel", "Zebra", "Crocodile", "Rhinoceros", "Giraffe", "Dolphin", "Duck", "Eagle", "Elephant", "Fish", "Fly"]}; colors = {"Blue", "White", Sequence[ "Yellow", "Purple", "Red", "Black", "Green", "Grey"]}; fruits = {"Apple", "Apricot", Sequence[ "Avocado", "Banana", "Blackberry", "Cherry", "Coconut", "Cranberry", "Grape", "Mango", "Melon", "Papaya", "Peach", "Pineapple", "Raspberry", "Strawberry", "Fig"]}; FeatureSpacePlot[ Join[animals, colors, fruits], FeatureExtractor > NetModel["GloVe 300Dimensional Word Vectors Trained on Wikipedia \ and Gigaword 5 Data"]] 
Word embeddings have been a key ingredient in natural language processing since 2013. Several embeddings are available in the Wolfram Neural Net Repository. The current model in FindTextualAnswer is primarily based on GloVe 300-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 Data.
A second part of the neural network produces a higher-level representation that takes into account the semantic matching between different passages of the text and the question. This part uses yet another powerful deep learning ingredient, called attention, that is particularly suited for natural language processing and the processing of sequences in general. The attention mechanism assigns weights to all words and uses them to compute a weighted representation. Like most of the state-of-the-art models of question answering, the neural network of FindTextualAnswer uses a two-way attention mechanism. The words of the question focus attention on the passage and the words of the passage focus attention on the question, meaning that the network exploits both a question-aware representation of the text and a context-aware representation of the question. This is similar to what you would do when answering a question about a text: first you read the question, then read the text with the question in mind (and possibly reinterpret the question), then focus on the relevant pieces of information in the text.
Let’s illustrate how encoding and attention work on a simple input example:
✕
question = "What colour are elephants?"; context = "Elephants have a grey or white skin."; 
The network is fed with the list of tokens from the context and the question:
✕
getTokens = StringSplit[#, {WhitespaceCharacter, x : PunctuationCharacter :> x}] &; input = <|"Context" -> getTokens@context, "Question" -> getTokens@question, "WordMatch" -> Join[{{0, 1, 1}}, ConstantArray[0, {7, 3}]]|> 
Note that this input includes a vector "WordMatch" that indicates, for each word of the context, whether it occurs in the question in some form. For instance, here the word “Elephants” is matched if we ignore the case. The goal of this tailored feature is to cope with out-of-vocabulary words, i.e. with words that are not in the word embeddings’ dictionary (their embedding will be a vector full of zeros).
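A feature of this kind is straightforward to compute. This Python sketch uses three illustrative flags (exact match, case-insensitive match and a crude stem match) that happen to reproduce the {0, 1, 1} row for “Elephants”; the real network’s flag semantics are internal and may differ:

```python
def word_match_features(context_tokens, question_tokens):
    """One row of three illustrative flags per context token:
    exact match, case-insensitive match, crude stem match."""
    stem = lambda w: w.lower().rstrip("s")  # toy stemmer, just for the sketch
    qset = set(question_tokens)
    qlower = {w.lower() for w in question_tokens}
    qstem = {stem(w) for w in question_tokens}
    return [[int(w in qset), int(w.lower() in qlower), int(stem(w) in qstem)]
            for w in context_tokens]

context = ["Elephants", "have", "a", "grey", "or", "white", "skin", "."]
question = ["What", "colour", "are", "elephants", "?"]
print(word_match_features(context, question)[0])  # [0, 1, 1] for "Elephants"
```

Only “Elephants” matches anything in the question, and only once case is ignored, so every other row is all zeros.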
The encoding of the text and the question are computed by two subparts of the full network. These intermediate representations can be extracted as follows:
✕
questionEncoded = net[input, NetPort["encode_question", "Output"]]; 
✕
contextEncoded = net[input, NetPort["encode_context", "Output"]]; 
Each encoding consists of one vector per word, and is therefore a sequence of vectors for a full text or question. These sequences of numbers are hardly interpretable per se, and would just be perceived as noise by an average human being. Yes, artificial neural networks are kind of black boxes.
The attention mechanism is based on a similarity matrix that is just the outer dot product of these two representations:
✕
outerProduct = Outer[Dot, questionEncoded, contextEncoded, 1]; 
This similarity matrix is normalized using a SoftmaxLayer. Each word of the question focuses attention on the text, with a row of weights that sum up to 1:
✕
SoftmaxLayer[][outerProduct] // MatrixPlot 
Each word of the text also focuses attention on the question with a set of weights that are this time obtained by normalizing the columns:
✕
SoftmaxLayer[][Transpose[outerProduct]] // Transpose // MatrixPlot 
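The two normalizations are easy to reproduce outside the network. This Python sketch softmax-normalizes a toy similarity matrix by rows (question-to-context attention) and by columns (context-to-question attention); the numbers are made up for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax: exponentiate and normalize to sum 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# A toy question-by-context similarity matrix (2 question words, 3 context words).
similarity = [[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]]

# Question -> context attention: each row is normalized to sum to 1.
rows = [softmax(row) for row in similarity]

# Context -> question attention: each column is normalized instead.
cols = list(map(list, zip(*[softmax(col) for col in zip(*similarity)])))

print([round(sum(r), 6) for r in rows])  # [1.0, 1.0]
```

Each row of the first matrix and each column of the second sums to 1, which is exactly the property the SoftmaxLayer enforces.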
Finally, the network builds upon the joint context-question representation, again with recurrent layers that aggregate evidence to produce a higher-level internal representation. A last part of the network then assigns probabilities to each possible selection of text. The outputs of the network are two distributions of probabilities for the positions of, respectively, the start and the end of the answer:
✕
netOutput = net[input]; probas = Flatten /@ KeyTake[netOutput, {"Start", "End"}]; ListPlot[probas, FrameTicks -> {ticksContext, Automatic}, Filling -> Axis, Joined -> True, PlotTheme -> "Web", PlotStyle -> {Blue, Red}, PlotRange -> {0, 1}] 
The most probable answer spans are then chosen using a beam search.
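For short texts, a beam search over spans is equivalent to an exhaustive scan. Here is a Python sketch of the idea with toy start/end distributions; the max_len cutoff is an illustrative assumption:

```python
def best_span(p_start, p_end, max_len=10):
    """Pick the span maximizing p_start[i] * p_end[j] with i <= j < i + max_len."""
    best, best_score = None, -1.0
    for i, ps in enumerate(p_start):
        for j in range(i, min(i + max_len, len(p_end))):
            score = ps * p_end[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# Toy distributions peaked at token 3 ("grey") for the start
# and token 5 ("white") for the end.
p_start = [0.01, 0.01, 0.02, 0.80, 0.06, 0.08, 0.01, 0.01]
p_end   = [0.01, 0.01, 0.01, 0.30, 0.05, 0.60, 0.01, 0.01]
print(best_span(p_start, p_end)[0])  # (3, 5): the span "grey or white"
```

A real beam search prunes this double loop, but the selected span is the same.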
These posterior probabilities are based on the assumptions that: (1) the answer is in the context; and (2) there is a unique answer. Therefore, they are not suited to estimate the probability that the answer is right. This probability is computed differently, using a logistic regression on a few intermediate activations of the network at the start and end positions. These activations are accessible through output NetPorts of the network, which we named "StartActivation" and "EndActivation":
✕
{startActivations, endActivations} = netOutput /@ {"StartActivation", "EndActivation"}; 
Logistic regression can be expressed as a shallow neural network with just one linear layer and a LogisticSigmoid function:
✕
scorer = NetModel["Wolfram FindTextualAnswer Scorer Net for WL 11.3"] 
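The computation itself is tiny. Here is an illustrative Python sketch of a linear layer followed by a logistic sigmoid; the weights and activations are made-up numbers, only to show the shape of the calculation:

```python
import math

def logistic_score(weights, bias, activations):
    """One linear layer followed by a logistic sigmoid:
    maps concatenated start/end activations to a probability in (0, 1)."""
    z = sum(w * a for w, a in zip(weights, activations)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights and activations, just to show the shape of the computation.
weights = [0.7, -0.2, 0.5, 0.1]
start_activation = [1.2, -0.4]
end_activation = [0.9, 0.3]
p = logistic_score(weights, -0.5, start_activation + end_activation)
print(0.0 < p < 1.0)  # True: the sigmoid always yields a valid probability
```

Whatever the activations are, the sigmoid squashes the linear combination into a usable probability score.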
In the current example, the positions of the answers “grey,” “white” and “grey or white” are given by:
✕
positions = <"grey or white" > {4, 6}, "grey" > {4, 4}, "white" > {6, 6}>; 
Then their probabilities can be obtained by accessing the intermediate activations at these positions and applying the logistic regression model:
✕
Map[scorer@ Join[startActivations[[First@#]], endActivations[[Last@#]]] &, positions] 
Now look at how the network takes into account some additional nuance in the input statement. With the word “sometimes,” the probability of the subsequent word “white” drops:
✕
context2 = "Elephants have a grey or sometimes white skin."; 
FindTextualAnswer is a promising achievement of deep learning in the Wolfram Language that mines knowledge in unstructured texts written in natural language. The approach is complementary to the principle of Wolfram|Alpha, which consists of querying a structured knowledge database that is carefully curated, updated and tuned with a unique magical sauce. FindTextualAnswer is different: it enables you to use any personal or specialized source of unstructured text. It can, for example, search a long history of emails for the answer to one of your questions.
It’s been an exciting beginning to the new year here at Wolfram Research with the coming release of Version 11.3 of the Wolfram Language, a soft announcement of the Wolfram Neural Net Repository and our launch of multiparadigm data science.
As part of the new year, we’re also launching some new content in the Public Relations department. As you may have seen, each month we are highlighting the accomplishments of our members on Wolfram Community. We are also recapping news and events about Wolfram each month. So, in case you missed the latest, check out these news stories:
Taliesin Beynon and Sebastian Bodenstein in our Advanced Research Group recently authored a guest post for O’Reilly Media’s Ideas blog about the use of Apache MXNet in the Wolfram Language, providing a behind-the-scenes glimpse of a high-level deep learning framework.
“The aim of this post will be threefold: to explain why MXNet was chosen as a back end, to show how the Wolfram Language neural net framework and MXNet can interoperate and, finally, to explain in some technical detail how Wolfram Language uses MXNet to train nets that take sequences of arbitrary length.”
The post details what went into implementing MXNet connectivity in the Wolfram Language as part of the framework for neural networks and deep learning. Read more.
Wolfram Research, NVIDIA and the National Center for Supercomputing Applications just announced breakthrough research in gravitational wave detection. Daniel George, Wolfram Summer School alum and a Wolfram intern, along with his coauthor Eliu Huerta, have published their work in Physics Letters B, outlining the use of deep learning for real-time gravitational wave discovery. Daniel used the Wolfram Language to build the deep learning framework called Deep Filtering. Read more.
Mental Floss published a piece about Theo Gray, co-founder of Wolfram Research and primary developer of the Wolfram Notebook interface way back in 1988. Theo is trained as a chemist and built the Periodic Table Table, which sits on the fifth floor here at Wolfram HQ. It is quite literally a table shaped like the periodic table, with slots for each element—even radioactive ones! Theo is also an accomplished author, and even has his hands in quilting at PaleGray Labs. Read more.
Last summer, we noticed something peculiar. A train station in Cambridge, UK, was getting some attention on Twitter for its unusual facade. Keen observers noticed that the train station appeared to be clad with Wolfram automata. When our CEO Stephen Wolfram caught wind of it, he did some investigating in the Wolfram Language and quickly discovered that “oh my gosh, it’s covered in rule 30s!” He even prototyped a cellular automata architectural panel generator for users to build their own designs. Read more.
For more updates, be sure to follow us on Twitter, and you can also check us out on Facebook.
Some trees are planted in an orchard. What is the maximum possible number of distinct lines of three trees? In his 1821 book Rational Amusement for Winter Evenings, J. Jackson put it this way:
Fain would I plant a grove in rows
But how must I its form compose
With three trees in each row;
To have as many rows as trees;
Now tell me, artists, if you please:
’Tis all I want to know.
Those familiar with tic-tac-toe’s three-in-a-row might wonder how difficult this problem could be, but it’s actually been looked at by some of the most prominent mathematicians of the past and present. This essay presents many new solutions that haven’t been seen before, shows a general method for finding more solutions and points out where current best solutions are improvable.
Various classic problems in recreational mathematics are of this type:
Here is a graphic for the last problem, 11 trees with 16 lines of 3 trees. Subsets[points,{3}] collects all sets of 3 points. Abs[Det[Append[#,1]&/@#]] computes twice the triangle area of each set. The sets with area 0 are the lines.
✕
Module[{points, lines}, points = {{-1, -1}, {-1, 1}, {-1, 2 + Sqrt[5]}, {0, 1}, {0, 0}, {0, 1/2 (1 + Sqrt[5])}, {1, -1}, {1, 1}, {1, 2 + Sqrt[5]}, {-(1/Sqrt[5]), 1 + 2/Sqrt[5]}, {1/Sqrt[5], 1 + 2/Sqrt[5]}}; lines = Select[Subsets[points, {3}], Abs[Det[Append[#, 1] & /@ #]] == 0 &]; Graphics[{EdgeForm[{Black, Thick}], Line[#] & /@ lines, White, Disk[#, .1] & /@ points}, ImageSize -> 540]] 
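The zero-area collinearity test works in any language with exact arithmetic. Here is a Python sketch that counts lines of three in a 3×3 grid of trees, where the 8 lines (3 rows, 3 columns, 2 diagonals) are easy to check by hand:

```python
from itertools import combinations

def collinear(p, q, r):
    """Three points are collinear iff the triangle they span has zero area,
    i.e. the 2x2 determinant of the edge vectors vanishes."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1) == 0

# A 3x3 grid of trees has 8 lines of three: 3 rows, 3 columns, 2 diagonals.
points = [(x, y) for x in range(3) for y in range(3)]
lines = [t for t in combinations(points, 3) if collinear(*t)]
print(len(lines))  # 8
```

With integer (or symbolic) coordinates the determinant test is exact, which is why the Wolfram version above can use it to select lines reliably.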
This solution for 12 points matches the known limit of 19 lines, but uses simple integer coordinates and seems to be new. Lines are found with GatherBy and RowReduce, which quickly find a canonical line form for any 2 points in either 2D or 3D space.
✕
Module[{name, root, vals, points, lines, lines3, lines2g}, name = "12 Points in 19 Lines of Three"; points = {{0, 0}, {6, 6}, {-6, -6}, {2, 6}, {-2, -6}, {-6, 6}, {6, -6}, {-6, 0}, {6, 0}, {0, 3}, {0, -3}}; lines = Union[Flatten[#, 1]] & /@ GatherBy[Subsets[points, {2}], RowReduce[Append[#, 1] & /@ #] &]; lines3 = Select[lines, Length[#] == 3 &]; lines2g = Select[lines, Length[#] == 2 && (#[[2, 2]] - #[[1, 2]])/(#[[2, 1]] - #[[1, 1]]) == -(3/2) &]; Text@Column[{name, Row[{"Point ", Style["\[FilledCircle]", Green, 18], " at infinity"}], Graphics[{Thick, EdgeForm[Thick], Line[Sort[#]] & /@ lines3, Green, InfiniteLine[#] & /@ lines2g, {White, Disk[#, .5]} & /@ points}, ImageSize -> 400, PlotRange -> {{-7, 7}, {-7, 7}}]}, Alignment -> Center]] 
This blog goes far beyond those old problems. Here’s how 27 points can make 109 lines of 3 points. If you’d like to see the best-known solutions for 7 to 27 points, skip to the gallery of solutions at the end. For the math, code and methodology behind these solutions, keep reading.
✕
With[{n = 27}, Quiet@zerosumGraphic[ If[orchardsolutions[[n, 2]] > orchardsolutions[[n, 3]], orchardsolutions[[n, 6]], Quiet@zerotripsymm[orchardsolutions[[n, 4]], Floor[(n  1)/2]]], n, {260, 210} 2]] 
What is the behavior as the number of trees increases? MathWorld’s orchard-planting problem, Wikipedia’s orchard-planting problem and sequence A003035 in the On-Line Encyclopedia of Integer Sequences list some of what is known. Let m be the number of lines containing exactly three points for a set of p points. In 1974, Burr, Grünbaum and Sloane (BGS) gave solutions for particular cases and proved the bounds:
Here’s a table.
✕
droppoints = 3; Style[Text@Grid[Transpose[Drop[Prepend[Transpose[{Range[7, 28], Drop[#[[2]] & /@ orchardsolutions, 6], {6, 7, 10, 12, 16, 19, 22, 26, 32, 37, 42, 48, 54, 60, 67, 73, 81, 88, 96, 104, 113, 121}, (Floor[# (# - 3)/6] + 1) & /@ Range[7, 28], Min[{Floor[#/3 Floor[(# - 1)/2]], Floor[(Binomial[#, 2] - Ceiling[3 #/7])/3]}] & /@ Range[7, 28], {2, 2, 3, 5, 6, 7, 9, 10, 12, 15, 16, 18, 20, 23, 24, 26, 28, 30, 32, "?", "?", "?"}, {2, 2, 3, 5, 6, 7, 9, 10, 12, 15, 16, 18, 28, 30, 31, 38, 40, 42, 50, "?", "?", "?"}}], {"points", "maximum known lines of three", "proven upper bound", "BGS lower bound", "BGS upper bound", "4-orchard lower bound", "4-orchard upper bound"}], droppoints]], Dividers -> {{2 -> Red}, {2 -> Red, 4 -> Blue, 6 -> Blue}}], 12] 
Terence Tao and Ben Green recently proved that the maximum number of lines is the BGS lower bound most of the time (“On Sets Defining Few Ordinary Lines”), but they did not describe how to get the sporadic exceptions. Existing literature does not currently show the more complicated solutions. For this blog, I share a method for getting elegant-looking solutions for the three-orchard problem, as well as describing and demonstrating the power of a method for finding the sporadic solutions. Most of the embeddings shown in this blog are new, but they all match existing known records.
For a given number of points p, let q = ⌊(p−1)/2⌋; select the 3-subsets of {−q, −q+1, …, q} that have a sum of 0 (mod p). That gives ⌊p(p−3)/6⌋+1 3-subsets. Here are the triples from p = 8 to p = 14. This number of triples is the same as the lower bound for the orchard problem, which Tao and Green proved is the best solution most of the time.
✕
Text@Grid[Prepend[Table[With[{triples = Select[Subsets[Range[-Floor[(p - 1)/2], Ceiling[(p - 1)/2]], {3}], Mod[Total[#], p] == 0 &]}, {p, Length[triples], Row[Row[Text@Style[ToString[Abs[#]], {Red, Darker[Green], Blue}[[If[# == p/2, 2, Sign[#] + 2]]], 25 - p] & /@ #] & /@ triples, Spacer[1]]}], {p, 8, 14}], {" \!\(\*StyleBox[\"p\",\nFontSlant->\"Italic\"]\)\!\(\*StyleBox[\" \",\nFontSlant->\"Italic\"]\)", " lines ", Row[{" triples with zero sum (mod \!\(\*StyleBox[\"p\",\nFontSlant->\"Italic\"]\)) with \!\(\*StyleBox[\"red\",\nFontColor->RGBColor[1, 0, 0]]\)\!\(\*StyleBox[\" \",\nFontColor->RGBColor[1, 0, 0]]\)\!\(\*StyleBox[\"negative\",\nFontColor->RGBColor[1, 0, 0]]\), \!\(\*StyleBox[\"green\",\nFontColor->RGBColor[0, 1, 0]]\)\!\(\*StyleBox[\" \",\nFontColor->RGBColor[0, 1, 0]]\)\!\(\*StyleBox[\"zero\",\nFontColor->RGBColor[0, 1, 0]]\) and \!\(\*StyleBox[\"blue\",\nFontColor->RGBColor[0, 0, 1]]\)\!\(\*StyleBox[\" \",\nFontColor->RGBColor[0, 0, 1]]\)\!\(\*StyleBox[\"positive\",\nFontColor->RGBColor[0, 0, 1]]\)"}]}], Spacings -> {0, 0}, Frame -> All] 
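The construction is easy to verify computationally. This Python sketch enumerates the zero-sum triples and checks that their count equals ⌊p(p−3)/6⌋+1 for each small p:

```python
from itertools import combinations

def zero_sum_triples(p):
    """3-subsets of the p labels -floor((p-1)/2) .. ceil((p-1)/2)
    whose sum is 0 modulo p."""
    q = (p - 1) // 2
    labels = range(-q, (p - 1) - q + 1)  # p consecutive labels
    return [t for t in combinations(labels, 3) if sum(t) % p == 0]

# The count matches the orchard lower bound floor(p*(p-3)/6) + 1 for each p here.
for p in range(8, 15):
    assert len(zero_sum_triples(p)) == p * (p - 3) // 6 + 1

print(len(zero_sum_triples(9)))  # 10 lines of three for nine trees
```

Note that the loop starts at p = 8: p = 7 is one of the sporadic exceptions where the best solution beats this count.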
Here’s a clearer graphic showing how this works. Pick three different numbers from –8 to 8 that sum to zero; you will find that those numbers lie on a straight line. The method used to place these numbers comes later.
That’s not the maximum possible number of lines. By moving these points slightly, the triples with a sum of zero (mod 17) can also become lines. One example is 4 + 6 + 7 = 17.
With[{n = 17},
 Quiet@zerosumGraphic[
   If[orchardsolutions[[n, 2]] > orchardsolutions[[n, 3]],
    orchardsolutions[[n, 6]],
    Quiet@zerotripsymm[orchardsolutions[[n, 4]], Floor[(n - 1)/2]]],
   n, {260, 210} 2]]
Does this method always give the best solution? No: there are at least four sporadic exceptions. Whether any other sporadic solutions exist is not known.
Grid[Partition[ zerosumGraphic[orchardsolutions[[#, 6]], #, {260, 210}] & /@ {7, 11, 16, 19}, 2]] 
There are also orchard problems with more than three points in a row.
Fifteen lines of four points using 15 points is simple enough. RowReduce is used to collect point pairs that lie on the same line, with RootReduce added to make sure everything is in a canonical form.
Module[{pts, lines},
 pts = Append[Join[
    RootReduce[Table[{Sin[2 Pi n/5], Cos[2 Pi n/5]}, {n, 0, 4}]],
    RootReduce[1/2 (3 - Sqrt[5]) Table[{Sin[2 Pi n/5], Cos[2 Pi n/5]}, {n, 0, 4}]],
    RootReduce[(1/2 (3 - Sqrt[5]))^2 Table[{Sin[2 Pi n/5], Cos[2 Pi n/5]}, {n, 0, 4}]]], {0, 0}];
 lines = Union[Flatten[#, 1]] & /@ Select[SplitBy[
     SortBy[Subsets[pts, {2}], RootReduce[RowReduce[Append[#, 1] & /@ #]] &],
     RootReduce[RowReduce[Append[#, 1] & /@ #]] &], Length[#] > 3 &];
 Graphics[{Thick, Line /@ lines, EdgeForm[{Black, Thick}], White, Disk[#, .05] & /@ pts},
  ImageSize -> 520]]
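The RowReduce trick amounts to giving every line a canonical signature and bucketing point pairs by it. Here's a hedged Python sketch of the same idea (the helper names are mine), normalizing each line's coefficients by its first nonzero entry:

```python
from collections import defaultdict
from fractions import Fraction
from itertools import combinations

def line_through(p, q):
    # Canonical (a, b, c) with a*x + b*y + c == 0 through p and q,
    # scaled so the first nonzero coefficient is 1.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    lead = next(v for v in (a, b, c) if v != 0)
    return (Fraction(a, lead), Fraction(b, lead), Fraction(c, lead))

def lines_with_at_least(points, k):
    # Bucket every point pair by its line's canonical form, keep big buckets.
    buckets = defaultdict(set)
    for p, q in combinations(points, 2):
        buckets[line_through(p, q)].update([p, q])
    return [s for s in buckets.values() if len(s) >= k]

# A 3x3 grid of integer points has 8 lines of 3: three rows, three columns, two diagonals.
grid = [(x, y) for x in range(3) for y in range(3)]
assert len(lines_with_at_least(grid, 3)) == 8
```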
Eighteen lines of 4 points using 18 points is harder, since it seems to require 3 points at infinity. When lines are parallel, projective geometers say that the lines intersect at infinity. With 4 points on each line and 4 lines through each point, this is a 4-configuration.
Module[{config18, linesconfig18, inf}, config18 = {{0, Root[9  141 #1^2 + #1^4 &, 1]}, {1/4 (21  9 Sqrt[5]), Root[9  564 #1^2 + 16 #1^4 &, 4]}, {1/4 (21 + 9 Sqrt[5]), Root[9  564 #1^2 + 16 #1^4 &, 4]}, {0, 2 Sqrt[3]}, {3, Sqrt[ 3]}, {3, Sqrt[3]}, {0, Sqrt[3]}, {3/ 2, (Sqrt[3]/2)}, {(3/2), (Sqrt[3]/2)}, {1/4 (3 + 3 Sqrt[5]), Root[9  564 #1^2 + 16 #1^4 &, 4]}, {1/4 (9 + 3 Sqrt[5]), Root[225  420 #1^2 + 16 #1^4 &, 1]}, {1/2 (6  3 Sqrt[5]), ( Sqrt[3]/2)}, {0, Root[144  564 #1^2 + #1^4 &, 4]}, {1/2 (21 + 9 Sqrt[5]), Root[9  141 #1^2 + #1^4 &, 1]}, {1/2 (21  9 Sqrt[5]), Root[9  141 #1^2 + #1^4 &, 1]}}; linesconfig18 = SplitBy[SortBy[Union[Flatten[First[#], 1]] & /@ (Transpose /@ Select[ SplitBy[ SortBy[{#, RootReduce[RowReduce[Append[#, 1] & /@ #]]} & /@ Subsets[config18, {2}], Last], Last], Length[#] > 1 &]), Length], Length]; inf = Select[ SplitBy[SortBy[linesconfig18[[1]], RootReduce[slope[Take[#, 2]]] &], RootReduce[slope[Take[#, 2]]] &], Length[#] > 3 &]; Graphics[{Thick, Line /@ linesconfig18[[2]], Red, InfiniteLine[Take[#, 2]] & /@ inf[[1]], Green, InfiniteLine[Take[#, 2]] & /@ inf[[2]], Blue, InfiniteLine[Take[#, 2]] & /@ inf[[3]], EdgeForm[Black], White, Disk[#, .7] & /@ config18}, ImageSize > {520, 460}]] 
If you do not like points at infinity, arrange 3 heptagons of 7 points to make a 4-configuration of 21 lines through 21 points. That isn’t the record, since it is possible to make at least 24 lines of 4 with 21 points.
Module[{pts, lines}, 21 linepts = 4 {{0, b}, {0, (b c)/( a  c)}, {2 a, b}, {0, ((b c)/(2 a + c))}, {0, (b c)/( 3 a  c)}, {a, b}, {a, b}, {c, 0}, {(c/3), 0}, {c/3, 0}, {c, 0}, {((3 a c)/(3 a  2 c)), (2 b c)/(3 a  2 c)}, {( a c)/(3 a  2 c), (2 b c)/(3 a  2 c)}, {(3 a c)/(3 a  2 c), ( 2 b c)/(3 a  2 c)}, {(a c)/(5 a  2 c), (2 b c)/( 5 a  2 c)}, {(a c)/(5 a + 2 c), (2 b c)/(5 a  2 c)}, {( a c)/(3 a + 2 c), (2 b c)/( 3 a  2 c)}, {((a c)/(a + 2 c)), ((2 b c)/(a + 2 c))}, {( a c)/(a + 2 c), ((2 b c)/(a + 2 c))}, {((a c)/( 3 a + 2 c)), ((2 b c)/(3 a + 2 c))}, {(a c)/( 3 a + 2 c), ((2 b c)/(3 a + 2 c))}} /. {a > 2, c > 1, b > 1}; lines = Union[Flatten[#, 1]] & /@ Select[SplitBy[ SortBy[Subsets[pts, {2}], RowReduce[Append[#, 1] & /@ #] &], RowReduce[Append[#, 1] & /@ #] &], Length[#] > 3 &]; Graphics[{Line /@ lines, EdgeForm[Black], White, Disk[#, .3] & /@ pts}, ImageSize > 500]] 
The best-known solution for 25 points has 32 lines, but this solution seems weak due to the low contribution made by the last 3 points. Progressively remove the points labeled 25, 24 and 23 (near the bottom) to see the best-known solutions that produce 30, 28 and 26 lines.
Module[{pts, lines}, pts = {{0, 1/4}, {0, 3/4}, {1, 1/2}, {1, 1/2}, {1, 1}, {1, 1}, {0, 0}, {0, 3/8}, {(1/3), 1/3}, {1/3, 1/3}, {(1/3), 1/6}, {1/3, 1/ 6}, {(1/5), 2/5}, {1/5, 2/5}, {(1/5), 1/2}, {1/5, 1/ 2}, {1, (1/2)}, {1, (1/2)}, {1, 1}, {1, 1}, {(1/3), 2/ 3}, {1/3, 2/3}, {(1/3), (2/3)}, {1/3, (2/3)}, {9/5, (6/5)}}; lines = SplitBy[SortBy[ (Union[Flatten[#, 1]] & /@ SplitBy[SortBy[Subsets[pts, {2}], RowReduce[Append[#, 1] & /@ #] &], RowReduce[Append[#, 1] & /@ #] &]), Length], Length]; Graphics[{InfiniteLine[Take[#, 2]] & /@ lines[[3]], White, EdgeForm[Black], Table[{Disk[pts[[n]], .04], Black, Style[Text[n, pts[[n]]], 8]}, {n, 1, Length[pts]}] & /@ pts, Black}, ImageSize > {520}]] 
The 27 lines in space are, of course, those of the Clebsch surface. There are 12 points of intersection not shown, and some lines have 9 points of intersection.
Module[{lines27, clebschpoints}, lines27 = Transpose /@ Flatten[Join[Table[RotateRight[#, n], {n, 0, 2}] & /@ {{{(1/3), (1/3)}, {1, 1}, {1, 1}}, {{0, 0}, {1, (2/3)}, {(2/3), 1}}, {{1/3, 1/ 3}, {1, (1/3)}, {(1/3), 1}}, {{0, 0}, {4/ 9, (2/9)}, {1, 1}}, {{0, 0}, {1, 1}, {4/9, (2/9)}}}, Permutations[#] & /@ {{{30, 30}, {35  19 Sqrt[5], 25 + 17 Sqrt[5]}, {5 + 3 Sqrt[5], 5  9 Sqrt[5]}}/ 30, {{6, 6}, {3 + 2 Sqrt[5], 6  Sqrt[5]}, {7 + 4 Sqrt[5], 8  5 Sqrt[5]}}/6}], 1]; clebschpoints = Union[RootReduce[Flatten[With[ {sol = Solve[e #[[1, 1]] + (1  e) #[[1, 2]] == f #[[2, 1]] + (1  f) #[[2, 2]]]}, If[Length[sol] > 0, (e #[[1, 1]] + (1  e) #[[1, 2]]) /. sol, Sequence @@ {} ]] & /@ Subsets[lines27, {2}], 1]]]; Graphics3D[{{ Sphere[#, .04] & /@ Select[clebschpoints, Norm[#] < 1 &]}, Tube[#, .02] & /@ lines27, Opacity[.4], ContourPlot3D[ 81 (x^3 + y^3 + z^3)  189 (x^2 y + x^2 z + x y^2 + x z^2 + y^2 z + y z^2) + 54 x y z + 126 (x y + x z + y z)  9 (x^2 + y^2 + z^2)  9 (x + y + z) + 1 == 0, {x, 1, 1}, {y, 1, 1}, {z, 1, 1}, Boxed > False][[1]]}, Boxed > False, SphericalRegion > True, ImageSize > 520, ViewAngle > Pi/8]] 
I’m not sure that’s optimal, since I managed to arrange 149 points in 241 lines of 5 points.
Module[{majorLines, tetrahedral, base, points, lines}, majorLines[pts_] := ((Drop[#1, 1] &) /@ #1 &) /@ Select[(Union[Flatten[#1, 1]] &) /@ SplitBy[SortBy[Subsets[(Append[#1, 1] &) /@ pts, {2}], RowReduce], RowReduce], Length[#1] > 4 &]; tetrahedral[{a_, b_, c_}] := Union[{{a, b, c}, {a, b, c}, {b, c, a}, {b, c, a}, {c, a, b}, {c, a, b}, {c, a, b}, {c, a, b}, {b, c, a}, {b, c, a}, {a, b, c}, {a, b, c}}]; base = {{0, 0, 0}, {180, 180, 180}, {252, 252, 252}, {420, 420, 420}, {1260, 1260, 1260}, {0, 0, 420}, {0, 0, 1260}, {0, 180, 360}, {0, 315, 315}, {0, 360, 180}, {0, 420, 840}, {0, 630, 630}, {0, 840, 420}, {140, 140, 420}, {180, 180, 540}, {252, 252, 756}, {420, 420, 1260}}; points = Union[Flatten[tetrahedral[#] & /@ base, 1]]; lines = majorLines[points]; Graphics3D[{Sphere[#, 50] & /@ points, Tube[Sort[#], 10] & /@ Select[lines, Length[#] == 5 &]}, Boxed > False, ImageSize > {500, 460}]] 
The 3D display is based on the following 2D solution, which has 25 points in 18 lines of 5 points. The numbers are barycentric coordinates. To decode point 231, separate the digits (2,3,1), divide each by the total (2/6,3/6,1/6) and simplify to (1/3,1/2,1/6). If the outer triangle has area 1, the point 231 extended to the outer edges makes triangles of areas 1/3, 1/2 and 1/6.
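The digit decoding is easy to mechanize. Here's a small Python sketch (the function name is mine, not from the post) that turns a label like 231 into exact barycentric coordinates:

```python
from fractions import Fraction

def digits_to_barycentric(label):
    # Split the label into digits, then divide each digit by the digit sum.
    digits = [int(ch) for ch in str(label)]
    total = sum(digits)
    return tuple(Fraction(d, total) for d in digits)

# Point 231 -> (2/6, 3/6, 1/6) -> (1/3, 1/2, 1/6), as described above.
assert digits_to_barycentric(231) == (Fraction(1, 3), Fraction(1, 2), Fraction(1, 6))
```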
Module[{peggpoints, elkpoints, elklines, linecoords}, peggpoints = Sort[#/Total[#] & /@ Flatten[(Permutations /@ {{0, 0, 1}, {0, 1, 1}, {0, 1, 2}, {0, 4, 5}, {1, 1, 2}, {1, 2, 2}, {1, 2, 3}, {1, 2, 6}, {1, 4, 4}, {2, 2, 3}, {2, 2, 5}, {2, 3, 4}, {2, 3, 5}, {2, 5, 5}, {2, 6, 7}, {4, 5, 6}}), 1]]; elkpoints = Sort[#/Total[#] & /@ Flatten[(Permutations /@ {{1, 1, 1}, {0, 0, 1}, {1, 2, 3}, {1, 1, 2}, {0, 1, 1}, {1, 2, 2}, {0, 1, 2}}), 1]]; elklines = First /@ Select[ SortBy[Tally[BaryLiner[#] & /@ Subsets[elkpoints, {2}]], Last], Last[#] > 4 &]; linecoords = Table[FromBarycentrics[{#[[1]], #[[2]]}, tri] & /@ Select[elkpoints, elklines[[n]].# == 0 &], {n, 1, 18}]; Graphics[{AbsoluteThickness[3], Line /@ linecoords, With[{coord = FromBarycentrics[{#[[1]], #[[2]]}, tri]}, {Black, Disk[coord, .12], White, Disk[coord, .105], Black, Style[Text[StringJoin[ToString /@ (# (Max[Denominator[#]]))], coord], 14, Bold]}] & /@ elkpoints}, ImageSize > {520, 450}]] 
A further exploration of this appears in Extreme Orchards for Gardner. There, I ask whether a self-dual configuration exists where the point set is identical to the line set. I managed to find the following 24-point 3-configuration. The numbers represent {0,2,–1}, with blue = positive, red = negative and green = zero. In barycentric coordinates, a line {a,b,c} is on point {d,e,f} if the dot product {a,b,c}.{d,e,f}==0. For the point {0,2,–1}, the lines {{–1,1,2},{–1,2,4},{0,1,2}} go through that point. Similarly, for the line {0,2,–1}, the points {{–1,1,2},{–1,2,4},{0,1,2}} are on that line. The set of 24 points is identical to the set of 24 lines.
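The incidence test is just a dot product, which can be checked in a few lines of Python (a quick sketch; the function name is mine):

```python
def incident(line, point):
    # In barycentric coordinates, line {a,b,c} passes through point {d,e,f}
    # exactly when the dot product {a,b,c}.{d,e,f} is zero.
    return sum(a * b for a, b in zip(line, point)) == 0

# The three lines listed above all pass through the point {0, 2, -1}.
point = (0, 2, -1)
for line in [(-1, 1, 2), (-1, 2, 4), (0, 1, 2)]:
    assert incident(line, point)
```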
FromBarycentrics[{m_, n_, o_}, {{x1_, y1_}, {x2_, y2_}, {x3_, y3_}}] := {m*x1 + n*x2 + (1  m  n)*x3, m*y1 + n*y2 + (1  m  n)*y3}; tri = Reverse[{{Sqrt[3]/2, (1/2)}, {0, 1}, {(Sqrt[3]/2), (1/2)}}]; With[{full = Union[Flatten[{#, RotateRight[#, 1], RotateLeft[#, 1]} & /@ {{1, 0, 2}, {1, 1, 2}, {1, 2, 0}, {1, 2, 1}, {1, 2, 4}, {1, 4, 2}, {0, 1, 2}, {0, 2, 1}}, 1]]}, Graphics[{EdgeForm[Black], Tooltip[Line[#[[2]]], Style[Row[ Switch[Sign[#], 1, Style[ToString[Abs[#]], Red], 0, Style[ToString[Abs[#]], Darker[Green]], 1, Style[ToString[Abs[#]], Blue]] & /@ #[[1]]], 16, Bold]] & /@ Table[{full[[k]], Sort[FromBarycentrics[#/Total[#], tri] & /@ Select[full, full[[k]].# == 0 &]]}, {k, 1, Length[full]}], White, {Disk[FromBarycentrics[#/Total[#], tri], .15], Black, Style[Text[ Row[Switch[Sign[#], 1, Style[ToString[Abs[#]], Red], 0, Style[ToString[Abs[#]], Darker[Green]], 1, Style[ToString[Abs[#]], Blue]] & /@ #], FromBarycentrics[#/Total[#], tri]], 14, Bold]} & /@ full}, ImageSize > 520]] 
With a longer computer run, I found an order-27, self-dual 4-configuration where the points and lines have the same set of barycentric coordinates.
With[{full = Union[Flatten[{#, RotateRight[#, 1], RotateLeft[#, 1]} & /@ {{2, 1, 4}, {2, 1, 3}, {1, 1, 1}, {1, 2, 0}, {1, 2, 1}, {1, 3, 2}, {1, 4, 2}, {0, 1, 2}, {1, 1, 2}}, 1]]}, Graphics[{EdgeForm[Black], Tooltip[Line[#[[2]]], Style[Row[ Switch[Sign[#], 1, Style[ToString[Abs[#]], Red], 0, Style[ToString[Abs[#]], Darker[Green]], 1, Style[ToString[Abs[#]], Blue]] & /@ #[[1]]], 16, Bold]] & /@ Table[{full[[k]], Sort[FromBarycentrics[#/Total[#], tri] & /@ Select[full, full[[k]].# == 0 &]]}, {k, 1, Length[full]}], White, {Tooltip[Disk[FromBarycentrics[#/Total[#], tri], .08], Style[Row[ Switch[Sign[#], 1, Style[ToString[Abs[#]], Red], 0, Style[ToString[Abs[#]], Darker[Green]], 1, Style[ToString[Abs[#]], Blue]] & /@ #], 16, Bold]]} & /@ full}, ImageSize > 520]] 
And now back to the mathematics of three-in-a-row, usually known as elliptic curve theory, though I’ll mostly be veering into geometry.
On the cubic curve given by y = x^{3}, all the triples from {–7,–6,…,7} that sum to zero happen to correspond to points on a straight line. The Table values are rescaled so that the aspect ratio is reasonable.
simplecubic = Table[{x/7, x^3/343}, {x, -7, 7}];
Graphics[{Cyan,
  Line[Sort[#]] & /@ Select[Subsets[simplecubic, {3}], Abs[Det[Append[#, 1] & /@ #]] == 0 &],
  {Black, Disk[#, .07], White, Disk[#, .06], Black, Style[Text[7 #[[1]], #], 16]} & /@ simplecubic},
 ImageSize -> 520]
For example, (2,3,–5) has a zero sum. On the cubic curve, those numbers correspond to the coordinates (2,8), (3,27) and (–5,–125), which are on a line. The triple (–∛2, –∛3, ∛2 + ∛3) also sums to zero and the corresponding points also lie on a straight line, but ignore that: restrict the coordinates to integers. With the curve y = x^{3}, all of the integers can be plotted, and any triple of integers that sums to zero is on a straight line.
TraditionalForm[
 Row[{Det[MatrixForm[{{2, 8, 1}, {3, 27, 1}, {-5, -125, 1}}]], " = ",
   Det[{{2, 8, 1}, {3, 27, 1}, {-5, -125, 1}}]}]]
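As a cross-check in Python (a sketch; the helper name is mine), the determinant vanishes exactly when the three parameters on y = x³ sum to zero:

```python
def collinear_on_cubic(a, b, c):
    # Points (t, t^3) for t in {a, b, c}; they are collinear iff the
    # 3x3 determinant with rows (t, t^3, 1) is zero.
    m = [(t, t**3, 1) for t in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det == 0

assert collinear_on_cubic(2, 3, -5)      # 2 + 3 - 5 == 0, so collinear
assert not collinear_on_cubic(1, 2, 3)   # sum is 6, so not collinear
```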
We can use the concept behind the cubic curve to make a rotationally symmetric zero-sum geometry around 0. Let blue, red and green represent positive, negative and zero values. Start with:
To place the values 3 and 4, the variables e and f are needed. The positions of all subsequent points, out to infinity, are then forced.
Note that e and f should not be 0 or 1, since that would cause all subsequent points to overlap the first five points.
Instead of building around 0, values can instead be reflected across the y = x diagonal to make a mirror-symmetric zero-sum geometry.
Skew symmetry is also possible with the addition of the variables (m,n).
The six variables (a,b,c,d,e,f) completely determine as many points as you like with rotational symmetry about (0,0) or mirror symmetry about the line y = x. Adding the variables (m,n) allows for a skew symmetry where the lines through corresponding point pairs intersect at (0,0). In the Manipulate, drag locator 1 to change (a,b) and locator 2 to change (c,d). Move the ef locator horizontally to change e and vertically to change f. For skew symmetry, move the mn locator to change the placements of m and n.
Manipulate[ Module[{ halfpoints, triples, initialpoints, pts2, candidate2}, halfpoints = Ceiling[(numberofpoints  1)/2]; triples = Select[Subsets[Range[halfpoints, halfpoints], {3}], Total[#] == 0 &]; initialpoints = rotational /. Thread[{a, b, c, d, e, f} > Flatten[{ab, cd, ef}]]; If[symmetry == "mirror", initialpoints = mirror /. Thread[{a, b, c, d, e, f} > Flatten[{ab, cd, ef}]]]; If[symmetry == "skew", initialpoints = skew /. Thread[{a, b, c, d, e, f, m, n} > Flatten[{ab, cd, ef, mn}]]]; pts2 = Join[initialpoints, Table[{{0, 0}, {0, 0}}, {46}]]; Do[pts2[[ index]] = (LineIntersectionPoint33[{{pts2[[1, #]], pts2[[index  1, #]]}, {pts2[[2, #]], pts2[[index  2, #]]}}] & /@ {2, 1}), {index, 5, 50}]; If[showcurve, candidate2 = NinePointCubic2[First /@ Take[pts2, 9]], Sequence @@ {}]; Graphics[{ EdgeForm[Black], If[showcurve, ContourPlot[Evaluate[{candidate2 == 0}], {x, 3, 3}, {y, 3, 3}, PlotPoints > 15][[1]], Sequence @@ {}], If[showlines, If[symmetry == "mirror", {Black, Line[pts2[[Abs[#], (3  Sign[#])/2 ]] & /@ #] & /@ Select[triples, Not[MemberQ[#, 0]] &], Green, InfiniteLine[ pts2[[Abs[#], (3  Sign[#])/ 2 ]] & /@ #] & /@ (Drop[#, {2}] & /@ Select[triples, MemberQ[#, 0] &])}, {Black, Line[If[# == 0, {0, 0}, pts2[[Abs[#], (3  Sign[#])/2 ]]] & /@ #] & /@ triples}], Sequence @@ {}], If[extrapoints > 0, Table[{White, Disk[pts2[[n, index]], .03]}, {n, halfpoints + 1, halfpoints + extrapoints}, {index, 1, 2}], Sequence @@ {}], Table[{White, Disk[pts2[[n, index]], .08], {Blue, Red}[[index]], Style[Text[n, pts2[[n, index]]] , 12]}, {n, halfpoints, 1, 1}, {index, 1, 2}], If[symmetry != "mirror", {White, Disk[{0, 0}, .08], Green, Style[Text[0, {0, 0}] , 12]}, Sequence @@ {}], Inset[\!\(\* GraphicsBox[ {RGBColor[1, 1, 0], EdgeForm[{GrayLevel[0], Thickness[Large]}], DiskBox[{0, 0}], {RGBColor[0, 0, 1], StyleBox[InsetBox["\<\"1\"\>", {0.05, 0.05}], StripOnInput>False, FontSize>18, FontWeight>Bold]}}, ImageSize>{24, 24}]\), ab], Inset[\!\(\* GraphicsBox[ {RGBColor[1, 1, 0], 
EdgeForm[{GrayLevel[0], Thickness[Large]}], DiskBox[{0, 0}], {RGBColor[0, 0, 1], StyleBox[InsetBox["\<\"2\"\>", {0.07, 0.05}], StripOnInput>False, FontSize>18, FontWeight>Bold]}}, ImageSize>{24, 24}]\), cd], Inset[\!\(\* GraphicsBox[ {RGBColor[0, 1, 0], EdgeForm[{GrayLevel[0], Thickness[Large]}], DiskBox[{0, 0}], {GrayLevel[0], StyleBox[InsetBox["\<\"ef\"\>", {0, 0}], StripOnInput>False, FontSize>9]}}, ImageSize>{21, 21}]\), ef], If[symmetry == "skew", Inset[\!\(\* GraphicsBox[ {RGBColor[1, 0, 1], EdgeForm[{GrayLevel[0], Thickness[Large]}], DiskBox[{0, 0}], {GrayLevel[0], StyleBox[InsetBox["\<\"mn\"\>", {0, 0}], StripOnInput>False, FontSize>9]}}, ImageSize>{21, 21}]\), mn], Sequence @@ {}]}, ImageSize > {380, 480}, PlotRange > Dynamic[(3/2)^zoom {{2.8, 2.8}  zx/5, {2.5, 2.5}  zy/5}]]], {{ab, {2, 2}}, {2.4, 2.4}, {2.4, 2.4}, ControlType > Locator, Appearance > None}, {{cd, {2, 2}}, {2.4, 2.4}, {2.4, 2.4}, ControlType > Locator, Appearance > None}, {{ef, {.7, .13}}, {2.4, 2.4}, {2.4, 2.4}, ControlType > Locator, Appearance > None}, {{mn, {2.00, 0.5}}, {2.4, 2.4}, {2.4, 2.4}, ControlType > Locator, Appearance > None}, "symmetry", Row[{Control@{{symmetry, "rotational", ""}, {"rotational", "mirror", "skew"}, ControlType > PopupMenu}}], "", "points shown", {{numberofpoints, 15, ""}, 5, 30, 2, ControlType > PopupMenu}, "", "extra points", {{extrapoints, 0, ""}, 0, 20, 1, ControlType > PopupMenu}, "", "move zero", Row[{Control@{{zx, 0, ""}, 10, 10, 1, ControlType > PopupMenu}, " 5", Style["x", Italic]}], Row[{Control@{{zy, 0, ""}, 10, 10, 1, ControlType > PopupMenu}, " 5", Style["y", Italic]}], "", "zoom exponent", {{zoom, 0, ""}, 2, 3, 1, ControlType > PopupMenu}, "", "show these", Row[{Control@{{showlines, True, ""}, {True, False}}, "lines"}], Row[{Control@{{showcurve, False, ""}, {True, False}}, "curve"}], TrackedSymbols :> {ab, cd, ef, mn, zx, zy, symmetry, numberofpoints, extrapoints, zoom}, ControlPlacement > Left, Initialization :> ( Clear[a]; Clear[b]; Clear[c]; 
Clear[d]; Clear[e]; Clear[f]; Clear[m]; Clear[n]; NinePointCubic2[pts3_] := Module[{makeRow2, cubic2, poly2, coeff2, nonzero, candidate}, If[Min[ Total[Abs[RowReduce[#][[3]]]] & /@ Subsets[Append[#, 1] & /@ pts3, {4}]] > 0, makeRow2[{x_, y_}] := {1, x, x^2, x^3, y, y x, y x^2, y^2, y^2 x, y^3}; cubic2[x_, y_][p_] := Det[makeRow2 /@ Join[{{x, y}}, p]]; poly2 = cubic2[x, y][pts3]; coeff2 = Flatten[CoefficientList[poly2, {y, x}]]; nonzero = First[Select[coeff2, Abs[#] > 0 &]]; candidate = Expand[Simplify[ poly2/nonzero]]; If[Length[FactorList[candidate]] > 2, "degenerate", candidate], "degenerate"]]; LineIntersectionPoint33[{{a_, b_}, {c_, d_}}] := ( Det[{a, b}] (c  d)  Det[{c, d}] (a  b))/Det[{a  b, c  d}]; skew = {{{a, b}, {a m, b m}}, {{c, d}, {c n, d n}}, {{a e m  c (1 + e) n, b e m  d (1 + e) n}, {( a e m + c n  c e n)/(e m + n  e n), (b e m + d n  d e n)/( e m + n  e n)}}, {{a f m  ((1 + f) (a e m  c (1 + e) n))/( e (m  n) + n), b f m  ((1 + f) (b e m  d (1 + e) n))/(e (m  n) + n)}, {( c (1 + e) (1 + f) n + a m (e + e f (1 + m  n) + f n))/( 1 + f (1 + e m (m  n) + m n)), ( d (1 + e) (1 + f) n + b m (e + e f (1 + m  n) + f n))/( 1 + f (1 + e m (m  n) + m n))}}}; rotational = {#, #} & /@ {{a, b}, {c, d}, {c (1 + e)  a e, d (1 + e)  b e}, {c (1 + e) (1 + f) + a (e  (1 + e) f), d (1 + e) (1 + f) + b (e  (1 + e) f)}}; mirror = {#, Reverse[#]} & /@ {{a, b}, {c, d}, {d (1  e) + b e, c (1  e) + a e}, {(c (1  e) + a e) (1  f) + b f, (d (1  e) + b e) (1  f) + a f}};), SynchronousInitialization > False, SaveDefinitions > True] 
In the rotationally symmetric construction, point 7 can be derived by intersecting lines whose endpoint labels sum to 7, such as the line through points 2 and 5 and the line through points 3 and 4.
TraditionalForm[
 FullSimplify[{h zerosumgeometrysymmetric[[2, 2]] + (1 - h) zerosumgeometrysymmetric[[5, 2]]} /.
    Solve[h zerosumgeometrysymmetric[[2, 2]] + (1 - h) zerosumgeometrysymmetric[[5, 2]] ==
       j zerosumgeometrysymmetric[[3, 2]] + (1 - j) zerosumgeometrysymmetric[[4, 2]], {h, j}][[1]]][[1]]]
The simple cubic had 15 points (–7 to 7) producing 25 lines. That falls short of the record of 31 lines. Is there a way to get 6 more lines? Notice the 6 triples with a sum of 0 (mod 15):
Select[Subsets[Range[-7, 7], {3}], Abs[Total[#]] == 15 &]
We can build up the triangle-area matrices for those sets of points. If a determinant is zero, the corresponding points are on a straight line.
matrices15 = Append[zerosumgeometrysymmetric[[#, 1]], 1] & /@ # & /@ {{2, 6, 7}, {3, 5, 7}, {4, 5, 6}};
Row[TraditionalForm@Style[MatrixForm[#]] & /@ matrices15, Spacer[20]]
Factor each determinant and hope to find a shared factor other than bc–ad, which would put all points on the same line. It turns out the determinants have –e + e^{2} + f – e f + f^{2} – e f^{2} + f^{3} as a shared factor.
Column[FactorList[Numerator[Det[#]]] & /@ matrices15] 
Are there any nice solutions of –e + e^{2} + f – e f + f^{2} – e f^{2} + f^{3} = 0? It turns out that letting e = φ (the golden ratio) allows f = –1.
Take[SortBy[Union[Table[
    FindInstance[-e + e^2 + f - e f + f^2 - e f^2 + f^3 == 0 && e > 0 && f > ff, {e, f}, Reals],
    {ff, -2, 2, 1/15}]], LeafCount], 6]
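A quick numerical check (in Python rather than the Wolfram Language; the function name is mine) confirms that e = φ, f = –1 is a root of this polynomial:

```python
import math

def orchard_poly_15(e, f):
    # The shared factor found above: -e + e^2 + f - e*f + f^2 - e*f^2 + f^3
    return -e + e**2 + f - e*f + f**2 - e*f**2 + f**3

phi = (1 + math.sqrt(5)) / 2  # the golden ratio
# With f = -1 the polynomial collapses to e^2 - e - 1, which phi satisfies.
assert abs(orchard_poly_15(phi, -1)) < 1e-12
```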
Here’s what happens with base points (a,b) = (1,1), (c,d) = (1,–1) and that value of (e,f).
points15try = RootReduce[zerotripsymm[{1, 1, 1, -1, (1 + Sqrt[5])/2, -1}, 7]];
zerosumGraphic[points15try/5, 15, 1.5 {260, 210}]
The solution’s convex hull is determined by points 4 and 2, so those points can be moved to make the solution more elegant.
RootReduce[({{w, x}, {y, z}} /. Solve[{{{w, x}, {y, z}}.points15try[[2, 1]] == {1, 1}, {{w, x}, {y, z}}.points15try[[4, 1]] == {1, 1}}][[ 1]]).# & /@ {points15try[[1, 1]], points15try[[2, 1]]}] 
The values for (a,b,c,d) do not need to be exact, so we can find the nearest rational values.
nearestRational[#, 20] & /@ Flatten[{{9 - 4 Sqrt[5], 5 - 2 Sqrt[5]}, {1, -1}}]
That leads to an elegant-looking solution of the 15-tree problem. There are 31 lines of 3 points, each a triple that sums to 0 (mod 15).
points15 = RootReduce[zerotripsymm[{1/18, 9/17, 1, -1, (1 + Sqrt[5])/2, -1}, 7]];
zerosumGraphic[points15, 15, 1.5 {260, 210}]
The 14-point version leads to the polynomial equation 2e – 2e^{2} – f + e f + e^{2} f – e f^{2} = 0, which has the nice solution {e→1/2, f→(–1+√17)/4}. With this method, an even number of points requires a point at infinity.
{{{1, 1}, {1, 1}}, {{1, 1}, {1, 1}}, {{1, 0}, {1, 0}}, {{1/2 (3  Sqrt[17]), 1/4 (1  Sqrt[17])}, {1/2 (3 + Sqrt[17]), 1/4 (1 + Sqrt[17])}}, {{1/4 (5  Sqrt[17]), 1/8 (1 + Sqrt[17])}, {1/4 (5 + Sqrt[17]), 1/8 (1  Sqrt[17])}}, {{1/8 (3 + 3 Sqrt[17]), 1/16 (7 + Sqrt[17])}, {1/8 (3  3 Sqrt[17]), 1/16 (7  Sqrt[17])}}} 
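The same numerical check works for the 14-point polynomial (again a Python sketch, with a function name of my own): e = 1/2 with f = (–1+√17)/4 is indeed a root.

```python
import math

def orchard_poly_14(e, f):
    # The 14-point orchard polynomial: 2e - 2e^2 - f + e*f + e^2*f - e*f^2
    return 2*e - 2*e**2 - f + e*f + e**2*f - e*f**2

# At e = 1/2 the polynomial reduces to 1/2 - f/4 - f^2/2, i.e. 2f^2 + f - 2 = 0,
# whose positive root is f = (-1 + sqrt(17))/4.
f = (-1 + math.sqrt(17)) / 4
assert abs(orchard_poly_14(0.5, f)) < 1e-12
```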
The solution on 15 points can be tweaked in various ways to match the 16-point, 37-line solution. The point at infinity is not particularly meaningful here. The last example uses skew symmetry, even though it looks the same.
Grid[Partition[{zerosumGraphic[ zerotripsymm[{5  2 Sqrt[5], 9  4 Sqrt[5], 1, 1, 1/2 (1 + Sqrt[5]), 1}, 7], 15, {260, 210}], zerosumGraphic[ zerotripsymm[{5  2 Sqrt[5], 9  4 Sqrt[5], 1, 1, 1/2 (1 + Sqrt[5]), 1}, 7], 16, {260, 210}], zerosumGraphic[ zerotripsymm[{1, 1, 1, 1, 3  Sqrt[5], 1/2 (3  Sqrt[5])}, 7], 16, {260, 210}], zerosumGraphic[ RootReduce[ zerotripskew[{0, 1  Sqrt[5], 3 + Sqrt[5], 3 + Sqrt[5], 1 + Sqrt[5], 1/2 (1 + Sqrt[5]), 1/2 (1 + Sqrt[5]), 1/2 (1 + Sqrt[5])}, 7]], 16, {260, 210}]}, 2]] 
The first solution is a special case of the 15-point solution with an abnormal amount of parallelism, enough to match the sporadic 16-point solution. How did I find it?
Here are the coordinates for the positive points up to 4 in the mirror-symmetric and skew-symmetric cases. They quickly get complicated.
TraditionalForm@Grid[Prepend[
   Transpose[Prepend[Transpose[First /@ Take[zerosumgeometrymirror, 4]], Range[1, 4]]],
   {"number", x, y}], Dividers -> {{2 -> Green}, {2 -> Green}}]
TraditionalForm@Grid[Prepend[
   Transpose[Prepend[Transpose[Prepend[First /@ Take[zerosumgeometryskew, 4], {0, 0}]], Range[0, 4]]],
   {"number", x, y}], Dividers -> {{2 -> Blue}, {2 -> Blue}}]
Here are coordinates for the positive points up to 7 in the rotationally symmetric case. These are more tractable, so I focused on them.
TraditionalForm@Grid[Prepend[
   Transpose[Prepend[Transpose[Prepend[First /@ Take[zerosumgeometrysymmetric, 7], {0, 0}]], Range[0, 7]]],
   {"number", x, y}], Dividers -> {{2 -> Red}, {2 -> Red}}]
For 14 and 15 points, the polynomials 2e – 2e^{2} – f + e f + e^{2} f – e f^{2} and –e + e^{2} + f – e f + f^{2} – e f^{2} + f^{3} appeared almost magically to solve the problem. Why does that happen? I have no idea, but it always seems to work. I’ll call these orchard-planting polynomials. It’s possible that they’ve never been used before; if they had, we would likely have seen these elegant solutions already. Here are the next few orchard-planting polynomials. As a reminder, these are the shared factors of the determinants generated by forcing triples modulo p to be lines.
Monitor[TraditionalForm@Grid[Prepend[Table[ With[{subs = Select[Subsets[Range[Floor[n/2], Floor[n/2]], {3}], Mod[ Abs[Total[#]], n ] == 0 && Not[MemberQ[#, (n/2)]] &]}, {n, Length[subs], Select[subs, Min[#] > 0 && Max[#] < 13 && Max[#] < n/2 &], Last[SortBy[ Apply[Intersection, (First[Sort[FullSimplify[{#, #}]]] & /@ First /@ FactorList[Numerator[#]] & /@ Expand[Det[ Append[zerosumgeometrysymmetric[[#, 1]], 1] & /@ #] & /@ Select[subs, Min[#] > 0 && Max[#] < 13 && Max[#] < n/2 &]])], LeafCount]]}], {n, 11, 16}], {"trees", "lines", "triples needing modulus", "orchard planting polynomial"}]], n] 
Here is the major step in the solution for 14 trees. The factor showing up in the numerator generated by (3,5,6) happens to be the denominator of item 7 = (3 + 5 + 6)/2.
With[{mat = Append[zerosumgeometrysymmetric[[#, 1]], 1] & /@ {3, 5, 6}}, TraditionalForm[ Row[{Det[MatrixForm[mat]], " = ", Factor[Det[mat]] == 0, "\n compare to ", Expand[Denominator[zerosumgeometrysymmetric[[7, 1, 1]] ]]}]]] 
But I should have expected this. The solution for 18 points is next. The point 9 is at infinity! Therefore, level 9 needs 1/0 to work properly.
zerosumGraphic[zerotripsymm[orchardsolutions[[18, 4]], 8], 18, 2 {260, 210}] 
Here's a contour plot of all the orchard-planting polynomials up to order 28. The numbers mark the locations of particularly elegant solutions for those numbers of points.
allorchardpolynomials = Table[orchardsolutions[[ff, 5]] == 0, {ff, 11, 28}];
Graphics[{ContourPlot[Evaluate[allorchardpolynomials], {e, -3/2, 2}, {f, -3/2, 2},
    PlotPoints -> 100][[1]],
  Red, Table[Style[Text[n, Take[orchardsolutions[[n, 4]], 2]], 20], {n, 11, 28}]}]
Recall from the construction that e and f should not be 0 or 1, since that would cause all subsequent points to overlap on the first five points, causing degeneracy. The curves intersect at these values.
We can also plot the locations where the (e,f) values lead to two-point lines having the same slope. Forcing parallelism leads to hundreds of extra curves. Do you see the lower-right corner where the green curve passes through many black curves? That's the location of the sporadic 16-point solution. It's right there!
slope[{{x1_, y1_}, {x2_, y2_}}] := (y2  y1)/(x2  x1); theslopes = {#  1, FullSimplify[ slope[Prepend[ First /@ Take[zerosumgeometrysymmetric, 11], {0, 0}][[#]]]]} & /@ Subsets[Range[ 10], {2}]; sameslope = {#[[2, 1]], #[[1]]} & /@ (Transpose /@ SplitBy[SortBy[{#[[1]], #[[2, 1]] == Simplify[#[[2, 2]]]} & /@ ({#[[1]], Flatten[#[[2]]]} & /@ SortBy[ Flatten[Transpose[{Table[#[[ 1]], {Length[#[[2]]]}], (List @@@ # & /@ #[[ 2]])}] & /@ Select[{#[[1]], Solve[{#[[2, 1]] == #[[2, 2]], d != (b c)/a , e != 0, e != 1, f != 0, f != 1}]} & /@ Take[SortBy[(Transpose /@ Select[Subsets[theslopes, {2}], Length[Union[Flatten[First /@ #]]] == 4 &]), Total[Flatten[#[[1]]]] &], 150], Length[StringPosition[ToString[FullForm[#[[2]]]], "Complex"]] == 0 && Length[#[[2]]] > 0 &], 1], Last]), Last], Last]); Graphics[{Table[ ContourPlot[ Evaluate[sameslope[[n, 1]]], {e, 3/2, 2}, {f, 3/2, 2}, PlotPoints > 50, ContourStyle > Black][[1]], {n, 1, 162}], Red, Table[ContourPlot[ Evaluate[allorchardpolynomials[[n]]], {e, 3/2, 2}, {f, 3/2, 2}, PlotPoints > 50, ContourStyle > Green][[1]], {n, 1, 18}], Tooltip[Point[#], #] & /@ Tuples[Range[6, 6]/4, {2}] }] 
That's my way of finding sporadic solutions. The mirror and skew plots have added levels of messiness sufficient to defy my current ability to analyze them.
Is there an easy way to generate these polynomials? I have no idea. Here are plots of their coefficient arrays.
Column[{Text@ Grid[{Range[11, 22], With[{array = CoefficientList[#, {e, f}]}, With[{rule = Thread[Apply[Range, MinMax[Flatten[array]]] > Join[Reverse[ Table[ RGBColor[1, 1  z/Abs[Min[Flatten[array]]], 1  z/Abs[Min[Flatten[array]]]], {z, 1, Abs[Min[Flatten[array]]]}]], {RGBColor[1, 1, 1]}, Table[ RGBColor[1  z/Abs[Max[Flatten[array]]], 1, 1], {z, 1, Abs[Max[Flatten[array]]]}]]]}, ArrayPlot[array, ColorRules > rule, ImageSize > Reverse[Dimensions[array]] {7, 7}, Frame > False ]]] & /@ (#[[5]] & /@ Take[orchardsolutions, {11, 22}])}, Frame > All], Text@Grid[{Range[23, 28], With[{array = CoefficientList[#, {e, f}]}, With[{rule = Thread[Apply[Range, MinMax[Flatten[array]]] > Join[Reverse[ Table[ RGBColor[1, 1  z/Abs[Min[Flatten[array]]], 1  z/Abs[Min[Flatten[array]]]], {z, 1, Abs[Min[Flatten[array]]]}]], {RGBColor[1, 1, 1]}, Table[ RGBColor[1  z/Abs[Max[Flatten[array]]], 1, 1], {z, 1, Abs[Max[Flatten[array]]]}]]]}, ArrayPlot[array, ColorRules > rule, ImageSize > Reverse[Dimensions[array]] {7, 7}, Frame > False ]]] & /@ (#[[5]] & /@ Take[orchardsolutions, {23, 28}])}, Frame > All]}, Alignment > Center] 
Grid[Partition[Table[Quiet@zerosumGraphic[
     If[orchardsolutions[[n, 2]] > orchardsolutions[[n, 3]], orchardsolutions[[n, 6]],
      Quiet@zerotripsymm[orchardsolutions[[n, 4]], Floor[(n - 1)/2]]],
     n, {260, 210}], {n, 9, 28}], 2]]
Looking for unsolved problems of the orchard-planting variety? Here are several I suggest:
And if you'd like to explore more recreational mathematics, check out some of the many entries on the Wolfram Demonstrations Project.
The Wolfram Language is essential to many Bridges attendees’ work. It’s used to explore ideas, puzzle out technical details, design prototypes and produce output that controls production machines. It’s applied to sculpture, graphics, origami, painting, weaving, quilting—even baking.
In the many years I’ve attended the Bridges conferences, I’ve enjoyed hearing about these diverse applications of the Wolfram Language in the arts. Here is a selection of Bridges artists’ work.
George Hart is well known for his insanely tangled sculptures based on polyhedral symmetries. Two of his recent works, SNOBall and Clouds, were puzzled out with the help of the Wolfram Language:
This video includes a Wolfram Language animation that shows how the elements of the Clouds sculpture were transformed to yield the vertically compressed structure.
One of Hart’s earliest Wolfram Language designs was for the Millennium Bookball, a 1998 commission for the Northport Public Library. Sixty wooden books are arranged in icosahedral symmetry, joined by cast bronze rings. Here is the Wolfram Language design for the bookball and a photo of the finished sculpture:
One of my favorite Hart projects was the basis of a paper with Robert Hanson at the 2013 Bridges conference: “Custom 3D-Printed Rollers for Frieze Pattern Cookies.” With a paragraph of Wolfram Language code, George translates images to 3D-printed rollers that emboss the images on, for example, cookie dough:
It’s a brilliant application of the Wolfram Language. I’ve used it myself to make cookie-roller presents and rollers for patterning ceramics. You can download a notebook of Hart’s code. Since Hart wrote this code, we’ve added support for 3D printing to the Wolfram Language. You can now send roller designs directly to a printing service or a local 3D printer using Printout3D.
Christopher Hanusa has made a business of selling 3D-printed objects created exclusively with the Wolfram Language. His designs take inspiration from mathematical concepts—unsurprising given his position as an associate professor of mathematics at Queens College, City University of New York.
Hanusa’s designs include earrings constructed with mesh and region operations:
… a pendant designed with transformed graphics primitives:
… ornaments designed with ParametricPlot3D:
… and a tea light made with ParametricPlot3D, using the RegionFunction option to punch an interesting pattern of perforations into the cylinder:
Hanusa has written about how he creates his designs with the Wolfram Language on his blog, The Mathematical Zorro. You can see all of Hanusa’s creations in his Shapeways shop.
William F. Duffy, an accomplished traditional sculptor, also explores forms derived from parametric equations and cast from large-scale resin 3D prints. Many of his forms result from Wolfram Language explorations.
Here, for example, are some of Duffy’s explorations of a fifth-degree polynomial that describes a Calabi–Yau space, important in string theory:
Duffy plotted one instance of that function in Mathematica, 3D-printed it in resin and made a mold from the print in which the bronze sculpture was cast. On the left is a gypsum cement test cast, and on the right the finished bronze sculpture, patinated with potassium sulfide:
On commission from the Simons Center for Geometry and Physics, Duffy created the object on the left as a bronze-infused, stainless steel 3D print. The object on the right was created from the same source file, but printed in nylon:
Duffy continues to explore functions on the complex plane as sources for sculptural structures:
You will be able to see more of Duffy’s work, both traditional and mathematical, on his forthcoming website.
Robert Fathauer uses the Wolfram Language to explore diverse phenomena, including fractal structures with negative curvature that are reminiscent of natural forms. This print of such a form was exhibited in the Bridges 2013 art gallery:
Fathauer realizes the ideas he explores in meticulously handcrafted ceramic forms reminiscent of corals and sponges:
One of Fathauer’s Mathematica-designed ceramic works consisted of 511 cubic elements (!). Here are shots of the Wolfram Language model and its realization, before firing, as a ceramic sculpture:
Unfortunately, in what Fathauer has confirmed was a painful experience, the sculpture exploded in the kiln during firing. But this structure, as well as several other fractal structures designed with the Wolfram Language, is available in Fathauer’s Shapeways shop.
Martin Levin makes consummately crafted models that reveal the structure of our world—the distance, angular and topological relationships that govern the possibilities and impossibilities of 3D space:
What you don’t—or barely—see is where the Wolfram Language has had the biggest impact in his work. The tiny connectors that join the tubular parts are 3D printed from models designed with the Wolfram Language:
Levin is currently designing 3D-printed modules that can be assembled to make a lost-plastic bronze casting of a compound of five tetrahedra:
The finished casting should look something like this (but mirror-reversed):
Henry Segerman explored some of the topics in his engaging book Visualizing Mathematics with 3D Printing with Wolfram Language code. While the forms in the book are explicitly mathematical, many have an undeniable aesthetic appeal. Here are snapshots from his initial explorations of surfaces with interesting topologies…
… which led to these 3D-printed forms in his Shapeways shop:
His beautiful Archimedean Spire…
… was similarly modeled first with Wolfram Language code:
In addition to mathematical models, Segerman collaborates with Robert Fathauer (above) to produce exotic dice, whose geometry begins as Wolfram Language code—much of it originating from the Wolfram MathWorld entry “Isohedron”:
In addition to constructing immersive virtual reality hyperbolic spaces, Elisabetta Matsumoto turns high-powered mathematics into elegant jewelry using the Wolfram Language. This piece, which requires a full screen of mathematical code to describe, riffs on one of the earliest discovered minimal surfaces, Scherk’s second surface:
Continuing the theme of hyperbolic spaces, here’s one of Matsumoto’s Wolfram Language designs, this one in 2D rather than 3D:
You can see Matsumoto’s jewelry designs in her Shapeways shop.
Father and son Koos and Tom Verhoeff have long used the Wolfram Language to explore sculptural forms and understand the intricacies of miter joint geometries and torsion constraints that enable Koos to realize his sculptures. Their work is varied, from tangles to trees to lattices in wood, sheet metal and cast bronze. Here is a representative sample of their work together with the underlying Wolfram Language models, all topics of Bridges conference papers:
Three Families of Mitered Borromean Ring Sculptures
Mitered Fractal Trees: Constructions and Properties
Folded Strips of Rhombuses, and a Plea for the Square Root of 2 : 1 Rhombus
Tom Verhoeff’s YouTube channel has a number of Wolfram Language videos, including one showing how the last of the structures above is developed from a strip of rhombuses.
In 2015, three Verhoeff sculptures were installed in the courtyard of the Mathematikon of Heidelberg University. Each distills one or more mathematical concepts in sculptural form. All were designed with the Wolfram Language:
You can find detailed information about the mathematical concepts in the Mathematikon sculptures in the Bridges 2016 paper “Three Mathematical Sculptures for the Mathematikon.”
Edmund Harriss has published two bestselling thinking person’s coloring books, Patterns of the Universe and Visions of the Universe, in collaboration with Alex Bellos. They’re filled with gorgeous mathematical figures that feed the mind as well as the creative impulse. Edmund created his figures with Mathematica, a tribute to the diversity of phenomena that can be productively explored with the Wolfram Language:
Loe Feijs and Marina Toeters are applying new technology to traditional weaving patterns: puppytooth and houndstooth, or pied-de-poule. With Wolfram Language code, they’ve implemented cellular automata whose patterns tend toward and preserve houndstooth patterns:
By adding random elements to the automata, they generate woven fabric with semi-random patterns that allude to houndstooth:
This video describes their houndstooth work. You can read the details in their Bridges 2017 paper, “A Cellular Automaton for Pied-de-poule (Houndstooth).”
You can hardly find a more direct translation from mathematical function to artistic expression than Caroline Bowen’s layered Plexiglas works. And yet her craftsmanship and aesthetic choices yield compelling works that transcend mere mathematical models.
The two pieces she exhibited in the 2016 Bridges gallery were inspired by examples in the SliceContourPlot3D documentation (!). All of the pieces pictured here were created using contour-plotting functions in Mathematica:
In 2017, Bowen exhibited a similarly layered piece with colors that indicate the real and imaginary parts of the complex-valued function ArcCsch[z^4] + Sec[z^2], as well as the function’s poles and branch cuts:
Paper sculptor Jeannine Mosely designs some of her origami crease patterns with the Wolfram Language. In some cases, as with these tessellations whose crease patterns require the numerical solution of integrals, the Wolfram Language is essential:
Mosely created these “bud” variations with a parametric design encapsulated as a Wolfram Language function:
If you’d like to try folding your own bud, Mosely has provided a template and instructions.
The design and fabrication of Helaman Ferguson’s giant Umbilic Torus SC sculpture was the topic of a Bridges 2012 paper authored with his wife Claire, “Celebrating Mathematics in Stone and Bronze: Umbilic Torus NC vs. SC.”
The paper details the fabrication of the sculpture (below left), an epic project that required building a gantry robot and carving 144 one-ton blocks of sandstone. The surface of the sculpture is textured with a Hilbert curve, a single line that traverses the entire surface, shown here in a photo of an earlier, smaller version of the sculpture (right):
The Hilbert curve is not just surface decoration—it’s also the mark left by the ball-head cutting tool that carved the curved surfaces of the casting molds. The ridges in the surface texture are the peaks left between adjacent sweeps of the cutting tool.
Ferguson used Mathematica to model the Hilbert curve tool path and to generate the G-code that controlled the CNC milling machine that carved the molds:
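Ferguson’s tool-path code was written in Mathematica; the curve itself is classical and easy to reproduce. Here is a minimal recursive construction (my own Python sketch, not Ferguson’s code) of the vertex sequence of an order-n Hilbert curve on a 2^n × 2^n grid:

```python
def hilbert(order):
    """Vertex sequence of the order-n Hilbert curve on a 2^n x 2^n grid."""
    if order == 0:
        return [(0, 0)]
    prev = hilbert(order - 1)
    s = 2 ** (order - 1)
    # Four transformed copies of the previous-order curve, visited in order,
    # joined end to end so consecutive vertices stay one grid step apart.
    quad1 = [(y, x) for x, y in prev]                      # bottom-left: transposed copy
    quad2 = [(x, y + s) for x, y in prev]                  # top-left: shifted up
    quad3 = [(x + s, y + s) for x, y in prev]              # top-right: shifted up and right
    quad4 = [(2 * s - 1 - y, s - 1 - x) for x, y in prev]  # bottom-right: anti-transposed
    return quad1 + quad2 + quad3 + quad4
```

Connecting the vertices in sequence yields the single line that covers the whole surface; a milling tool path adds tool geometry and feed rates on top of exactly this ordering.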
I too participate in the Bridges conferences, and I use the Wolfram Language nearly every day to explore graphical and sculptural ideas. One of the more satisfying projects I undertook was the basis of a paper I presented at the 2015 Bridges conference, “Algorithmic Quilting,” written in collaboration with Theodore Gray and Nina Paley.
The paper describes an algorithmic method we used to generate a wide variety of single-line fills for quilts. Starting with a distribution of points, we make a graph on the points, extract a spanning tree from it and render a fill by tracing around the tree:
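Our implementation is in the Wolfram Language; the pipeline itself is simple enough to sketch in a few lines of Python (illustrative only; function names are mine). A spanning tree has no cycles, so walking around it and returning along each edge yields one closed line that visits every point:

```python
import math, random

def spanning_tree(points):
    """Prim's algorithm: edges of a minimum spanning tree on a point set."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge from the tree to a point not yet in it.
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        edges.append((i, j))
        in_tree.add(j)
    return edges

def trace(points, edges):
    """Walk around the tree depth-first, returning along each edge,
    which yields a single closed stitch path visiting every point."""
    adj = {i: [] for i in range(len(points))}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    path = []
    def visit(v, parent):
        path.append(points[v])
        for w in adj[v]:
            if w != parent:
                visit(w, v)
                path.append(points[v])  # come back along the same edge
    visit(0, None)
    return path
```

Different point distributions and different graphs (nearest-neighbor, Delaunay, etc.) give the different fill textures shown in the paper.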
We tested the algorithm by generating a variety of backgrounds for a quilt based on frames of Eadweard Muybridge’s horse motion studies:
Here’s an animation of the frames in the quilt:
Here are four of us, shown as dots, participating in the 2017 Illinois Marathon:
How did the above animation and the in-depth look at our performance come about? Read on to find out.
Why do I run? Of course, the expected answer is health. But when I go out for a run, I am really not concerned about my longevity. And quite frankly, given the number of times I have almost been hit by a car, running doesn’t seem to be in my best interest. For me, it is simply a good way to maintain some level of sanity. Also, it is location-independent. When I travel, I pack an extra pair of running shoes, and I am set. Running is a great way to scope out a new location. Additionally, runners are a very friendly bunch of people. We greet, we chat, we hate on the weather together. And lastly, have you ever been to a race? If so, then you know that the spectator race signs are hilarious, often politically incorrect and R-rated.
I started running longer distances in 2014. Since then, I have completed eight marathons, one of which was the 2015 Bank of America Chicago Marathon. After completing that race, I wrote a blog post analyzing the runner dataset and looking at various aspects of the race.
Since then, we have shifted focus to the Illinois Marathon here in Champaign. While Wolfram Research is an international company, it also makes sense for us to engage in our local community.
The Illinois Marathon does a great job tying together our twin cities of Champaign and Urbana. Just have a look at the map: starting in close proximity to the State Farm Center, the runners navigate across the UIUC campus, through both downtown areas, various residential neighborhoods and major parks for a spectacular finish on Zuppke Field inside Memorial Stadium.
Since its inception in 2009, the event has doubled the number of runners and races offered, as well as the sponsors and partners involved. By attracting a large number of people traveling to Champaign and Urbana for this event, it has quite an economic impact on our community. This is also reflected in the amount of charitable donations raised every year.
As you can imagine, here at Wolfram we were very interested in partnering with the marathon on some kind of data crunching. Over the summer of 2017, we received the full registration dataset to work with. We applied the 10-step process described by Stephen Wolfram in this blog post.
We first import a simple spreadsheet.
raw = Import["/Users/eilas/Desktop/Work/Marathon/ILMarathon2017/Marathon_Results_Modified.csv", "Numeric" -> False];
The raw table’s header row looks as follows:
header = raw[[1]]
But it’s more convenient to represent the raw data as key-value pairs:
fullmarathon = AssociationThread[header -> #] & /@ Rest[raw];
fullmarathon[[1]]
Wherever possible, these data points should be aligned with entities in the Wolfram Language. This not only allows for a consistent representation, but also gives access to all of the data in the Wolfram Knowledgebase for those items if desired later.
Interpreter is a very powerful tool for such purposes. It allows you to parse any arbitrary string as a particular entity type, and is often the first step in trying to align data. As an example, let’s align the given location information.
allLocations2017 = Union[{"CITY", "STATE", "COUNTRY"} /. fullmarathon];
Here is a random example.
locationExample = RandomChoice[allLocations2017]
Interpreter["City"][StringJoin[StringRiffle[locationExample]]]
In most cases, this works without a hitch. But some location information may not be what the system expects. Participants may have specified suburbs, neighborhoods, unincorporated areas or simply made a typo. This can make an automatic interpretation impossible. Thus, we need to be prepared for other contingencies. From the same dataset, let’s look at this case:
problemExample = {"O Fallon", "IL", "United States"};
Interpreter["City"][StringJoin[StringRiffle[problemExample]]]
In such cases we can fall back on other information, here the provided postal code 62269.
With[{loc = Interpreter["Location"]["62269"]}, GeoNearest["City", loc]][[1]]
As you can see, we do know of the city, but the initial interpretation failed due to a missing apostrophe. In comparison, this would have worked just fine:
Interpreter["City"][StringJoin[StringRiffle[{"O'Fallon", "IL", "United States"}]]]
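In pseudocode terms, the contingency is a simple fallback chain. A hypothetical Python sketch of the control flow (parse_city and nearest_city_to_zip stand in for Interpreter and GeoNearest; both names are mine):

```python
def interpret_location(city, state, country, postal_code,
                       parse_city, nearest_city_to_zip):
    """Try a direct city interpretation; on failure, fall back to the
    postal code and take the nearest known city."""
    result = parse_city(f"{city} {state} {country}")
    if result is not None:
        return result
    return nearest_city_to_zip(postal_code)
```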
The major piece of information that runners are interested in is their split times. The Illinois Marathon records the clock and net times at six split distances: start, 10 kilometers, 10 miles, 13.1 miles (half-marathon distance), 20 miles and 26.2 miles (full marathon distance).
random20MTime = RandomChoice["20 MILE NET TIME" /. fullmarathon]
These are given as a list of three colon-separated numbers, which we want to represent as Wolfram Language Quantity objects.
Quantity[MixedMagnitude[FromDigits /@ StringSplit[random20MTime, ":"]], MixedUnit[{"Hours", "Minutes", "Seconds"}]]
As with the Interpreter mentioned before, we also have to be careful in interpreting the recorded times. For the half-marathon split and longer distances, even the fastest runner needs at least an hour. Thus, we know "xx:yy:zz" always refers to "hours:minutes:seconds". But for the shorter distances of 10 kilometers and 10 miles, this might be "minutes:seconds:milliseconds".
random10KTime = RandomChoice["10K NET TIME" /. fullmarathon]
This is then incorrect.
Quantity[MixedMagnitude[FromDigits /@ StringSplit[random10KTime, ":"]], MixedUnit[{"Hours", "Minutes", "Seconds"}]]
No runner took more than two days to finish a 10-kilometer distance. Logic must be put in to verify the values before returning the final Quantity objects. This is the correct interpretation:
Quantity[MixedMagnitude[FromDigits /@ StringSplit[random10KTime, ":"]], MixedUnit[{"Minutes", "Seconds", "Milliseconds"}]]
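The validation logic amounts to: trust hours:minutes:seconds whenever the distance guarantees a run of at least an hour, and otherwise check whether the fields are plausible. A simplified Python sketch of that rule (the 3-hour cutoff for short splits is my assumption, not the production logic):

```python
def parse_split(text, split_distance_miles):
    """Parse a colon-separated race split into seconds, resolving the
    hours-vs-minutes ambiguity."""
    a, b, c = (int(p) for p in text.split(":"))
    if split_distance_miles >= 13.1:
        # Half marathon or longer: always hours:minutes:seconds.
        return a * 3600 + b * 60 + c
    # Shorter splits may be minutes:seconds:milliseconds. A third field
    # over 59 can only be milliseconds; a first field over 3 cannot be
    # hours for a 10K or 10-mile leg (assumed cutoff).
    if c > 59 or a > 3:
        return a * 60 + b + c / 1000
    return a * 3600 + b * 60 + c
```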
Once the data has been cleaned up, it’s just a matter of creating an Association of key-value pairs. An example piece of data for one runner shows the structure:
We did not just arrange the dataset by runner, but by division as well. The divisions recognized by most marathons are as follows:
{"Female19AndUnder", "Female20To24", "Female25To29", "Female30To34",
 "Female35To39", "Female40To44", "Female45To49", "Female50To54",
 "Female55To59", "Female60To64", "Female65To69", "Female70To74",
 "Male19AndUnder", "Male20To24", "Male25To29", "Male30To34",
 "Male35To39", "Male40To44", "Male45To49", "Male50To54", "Male55To59",
 "Male60To64", "Male65To69", "Male70To74", "Male75To79",
 "Male80AndOver", "FemaleOverall", "FemaleMaster", "MaleOverall",
 "MaleMaster"}
For each of these divisions, we included information about the minimum, maximum and mean running times. Since this marathon is held on a flat course and is thus fastpaced, we also added each division’s Boston Marathon–qualifying standard, and information about the runners’ qualifications.
With the data cleaned up and processed, it’s now simple to construct an EntityStore so that the data can be used in the EntityValue framework in the Wolfram Language. It’s mainly just a matter of attaching metadata to the properties so that they have displayfriendly labels.
EntityStore[{
  "ChristieClinicMarathon2017" -> <|
    "Label" -> "Christie Clinic Marathon 2017 participant",
    "LabelPlural" -> "Christie Clinic Marathon 2017 participants",
    "Entities" -> processed,
    "Properties" -> <|
      "BibNumber" -> <|"Label" -> "bib number"|>,
      "Event" -> <|"Label" -> "event"|>,
      "LastName" -> <|"Label" -> "last name"|>,
      "FirstName" -> <|"Label" -> "first name"|>,
      "Name" -> <|"Label" -> "name"|>,
      "Label" -> <|"Label" -> "label"|>,
      "City" -> <|"Label" -> "city"|>,
      "State" -> <|"Label" -> "state"|>,
      "Country" -> <|"Label" -> "country"|>,
      "ZIP" -> <|"Label" -> "ZIP"|>,
      "ChristieClinic2017Division" -> <|"Label" -> "Christie Clinic 2017 division"|>,
      "Gender" -> <|"Label" -> "gender"|>,
      "PlaceDivision" -> <|"Label" -> "place division"|>,
      "PlaceGender" -> <|"Label" -> "place gender"|>,
      "PlaceOverall" -> <|"Label" -> "place overall"|>,
      "Splits" -> <|"Label" -> "splits"|>|>|>,
  "ChristieClinic2017Division" -> <|
    "Label" -> "Christie Clinic 2017 division",
    "LabelPlural" -> "Christie Clinic 2017 divisions",
    "Entities" -> divTypeEntities,
    "Properties" -> <|
      "Label" -> <|"Label" -> "label"|>,
      "Mean" -> <|"Label" -> "mean net time"|>,
      "Min" -> <|"Label" -> "min net time"|>,
      "Max" -> <|"Label" -> "max net time"|>,
      "BQStandard" -> <|"Label" -> "Boston qualifying standard"|>,
      "BeatBQ" -> <|"Label" -> "beat Boston qualifying standard"|>,
      "NumberBeat" -> <|"Label" -> "count beat Boston qualifying standard"|>,
      "RangeBQ" -> <|"Label" -> "within range Boston qualifying standard"|>,
      "NumberRange" -> <|"Label" -> "count within range Boston qualifying standard"|>,
      "OutsideBQ" -> <|"Label" -> "outside range Boston qualifying standard"|>,
      "NumberOutside" -> <|"Label" -> "count outside range Boston qualifying standard"|>|>|>}]
In addition to creating the entity store, the split times also give us an estimate of a runner’s position along the course as the race progresses. Thus we know the distribution of all runners throughout the race course. We took this information and plotted the runner density for each minute of an eighthour race, and combined the frames into a movie.
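The position estimate is plain linear interpolation between consecutive split marks. A Python sketch of the idea (distances in miles, times in minutes; the function name is mine):

```python
def position_at(minute, split_marks, split_times_min):
    """Linearly interpolate a runner's course position (in miles) at a given
    race-clock minute from recorded split distances and split times."""
    if minute <= split_times_min[0]:
        return split_marks[0]
    segments = zip(zip(split_marks, split_times_min),
                   zip(split_marks[1:], split_times_min[1:]))
    for (d0, t0), (d1, t1) in segments:
        if t0 <= minute <= t1:
            frac = (minute - t0) / (t1 - t0)
            return d0 + frac * (d1 - d0)
    return split_marks[-1]  # past the finish
```

Evaluating this for every runner at every minute gives the per-minute density of runners along the course that the movie frames visualize.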
It would be interesting to see how a single runner compares to the entire field. Obviously we don’t want to make a movie for 1,000+ runners and 500,000 movies for all possible pairs of runners. Instead, we utilized the fact that each runner follows a two-dimensional path in the viewing plane perpendicular to the line going from the viewpoint to the center of the map. We calculated these 2D runner paths and superimposed them over the original movie frames. Since the frames are all Graphics3D expressions in the Wolfram Language before export, this worked like a charm. We created the one movie to run them all.
Now we need to make the data available to the general public in an easily accessible way. An obvious choice is the use of the Wolfram Cloud. The entity store, the runner position data and the density movie are easily stored in our cloud. And with some magic from my terrific coworkers, we were able to combine it all into this amazing microsite.
By default, the movie is shown. Upon a user submitting a specific bib number, the movie is overlaid with individual runner information. Additionally, we are accessing all information stored about this specific runner and their division.
More information about the development of Wolfram microsites can be found here.
Besides the microsite, there are many interesting computations that can be performed that surround the concept of a marathon. I have explored a few of these below.
To give you an idea of the size of the event, let’s look at a few random numbers associated with the marathon weekend. Luckily, Wolfram|Alpha has something to say about all of these.
One thousand two hundred seventeen runners finished the full marathon in 2017. This equals a total of 31,885.4 miles, which is comparable to 2.4 times the length of the Great Wall of China, or the length of 490,000 soccer fields.
WolframAlpha["31885.4 miles", {{"ComparisonAsLength", 1}, "Content"}]
WolframAlpha["how many soccer fields stretch 31885.4 miles", {{"ApproximateResult", 1}, "Content"}]
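The total mileage itself is simple arithmetic, which is easy to verify:

```python
# 1,217 full-marathon finishers, 26.2 miles each.
finishers = 1217
marathon_miles = 26.2
total_miles = round(finishers * marathon_miles, 1)
print(total_miles)  # → 31885.4
```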
The marathon would literally never have happened had it not been for Walter Hunt inventing the safety pin back in 1849. About 80,000 of them were used during the weekend to keep bib numbers in place.
WolframAlpha["safety pin", {{"BasicInformation:InventionData", 1}, "Content"}]
The runners ate 1,600 oranges and 15,000 bananas, and drank 9,600 gallons of water and 1,920 gallons of Gatorade along the race course. Wolfram|Alpha will tell you that 1,600 oranges are enough to fill two bathtubs:
WolframAlpha["How many oranges fit in a bathtub?", {{"ApproximateResult", 1}, "Content"}]
… and contain an astounding 20 kilograms of sugar:
WolframAlpha["sugar in 1,600 oranges", {{"Result", 1}, "Content"}]
And trust me: 20 miles into the race while questioning all your life choices, a sweet orange slice will fix any problem. But let’s get to the finish line: here the runners finished another 800 pounds of pasta, 1,100 pizzas and another 32,600 bottles of water. The pasta and pizza provided a combined 1.8×10^6 dietary calories:
WolframAlpha["calories in 800 lbs of pasta and 1100 pizzas", {{"Result", 1}, "Content"}]
But we are not done yet. The theme of the 2017 Illinois Marathon was the 150th birthday of the University of Illinois. Ever tried to pronounce “sesquicentennial”? Going above and beyond, the race administration decided to provide the runners with 70 birthday sheet cakes—each 18×24 inches. Thanks to the folks working at the Meijer bakery, we came to find out that each such cake contains 21,340 calories, totaling close to 1.5 million calories!
Table[WolframAlpha["70*21340 food calories", {{"Comparison", j}, "Content"}], {j, 2}] // Column
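As a quick sanity check on the cake total:

```python
# 70 sheet cakes at 21,340 calories each.
total_cake_calories = 70 * 21340
print(total_cake_calories)  # → 1493800, close to 1.5 million
```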
Remember the 15,000 bananas I mentioned just a few moments ago? Turns out that their calorie count is comparable to that of the sheet cakes. That might make for a difficult discussion with a child whether “to sheet cake” or “to banana.”
WolframAlpha["calories in 15,000 bananas", {{"Result", 1}, "Content"}]
What can one do with all those calories? You did just participate in a race, and should be able to splurge a bit on food. Consider a male runner weighing 159 pounds running the marathon distance at a nine-minute-per-mile pace. He burns roughly 3,300 calories.
WolframAlpha["Calories burned running at pace 9 min/mi for 26.2 miles", IncludePods -> "MetabolicProperties", AppearanceElements -> {"Pods"}]
Though not recommended, you could follow up with 32 guilt-free beers of the kind typically offered after a marathon race, or 17 2×2-inch servings of sheet cake.
(* servings of cake: calories burned divided by calories per piece; one 18×24-inch cake cuts into 108 2×2-inch pieces *)
N[3339/(21340/108)]
Did I mention weather? Weather in Champaign is an unwelcome participant: one who does not pay a race fee, is constantly in everyone’s way, makes up its mind last-minute, does what it wants and unleashes full force. Though 2017 turned out fine, let’s look at WeatherData for the 2016 and 2015 race weekends.
Last year, the rain set in with the start of the race, lasted through the entire event and left town when the race was over. I was drenched before even crossing the starting line.
Table[WolframAlpha["Weather Champaign 4/30/2016", {{"WeatherCharts:WeatherData", k}, "Content"}], {k, 2, 3}] // Column
But that wasn’t the worst we had seen: in 2015, a thunderstorm descended on this town while the race was ongoing. Thus, the Illinois Marathon is one of the few marathons that actually had to be canceled mid-race.
As I mentioned at the very beginning, the runners here at Wolfram Research are a tough crowd, and weather won’t deter us. If you feel inspired and would like to see yourself in a future version of the Marathon Viewer, this is the place to start: Illinois Marathon registration.
With the images from the Juno mission being made available to the public, I thought it might be fun to try my hand at some image processing with them. Though my background is not in image processing, the Wolfram Language has some really nice tools that lessen the learning curve, so you can focus on what you want to do vs. how to do it.
The Juno mission arose out of the effort to understand planetary formation. Jupiter, being the most influential planet in our solar system—both literally (in gravitational effect) and figuratively (in the narrative surrounding our cosmic origin)—was the top contender for study. The Juno spacecraft was launched into orbit to send high-res images of Jupiter’s apparent “surface” of gases back to Earth, in order to answer some of the questions we have about our corner of the universe.
The images captured by the Juno spacecraft give us a complete photographic map of Jupiter’s surface in the form of color-filtered, surface patch images. Assembling them into a complete color map of the surface requires some geometric and image processing.
Images from the JunoCam were taken with four different filters: red, green, blue and near-infrared. The first three of these are taken on one spacecraft rotation (about two revolutions per minute), and the near-infrared image is taken on the second rotation. The final image product stitches all the single-filter images together, creating one projected image.
NASA has put together a gallery of images captured through the JunoCam that contains all the pieces used for this procedure, including the raw, unsorted image; the red, green and blue filtered images; and the final projected image.
Let’s first import the specific red, green and blue images:
imgBlue = Import["~/Desktop/JunoImages/ImageSet/JNCE_2017192_07C00061_V01blue.png"];
imgGreen = Import["~/Desktop/JunoImages/ImageSet/JNCE_2017192_07C00061_V01green.png"];
imgRed = Import["~/Desktop/JunoImages/ImageSet/JNCE_2017192_07C00061_V01red.png"];
{imgRed, imgGreen, imgBlue}
To assemble an RGB image from these bands, I use ColorCombine:
jup = ColorCombine[{imgRed, imgGreen, imgBlue}] // ImageResize[#, Scaled[.25]] &
To clear up some of the fogginess in the image, we need to adjust its contrast, brightness and gamma parameters:
jupInit = ImageAdjust[jup, {.14 (*contrast*), .3 (*brightness*), 2. (*gamma*)}]
You can see a shadowing effect that wasn’t as prominent in the initial color-combined image. To prevent the shadowing on the foreground image from disturbing any further analysis, the brightness needs to be uniform throughout the image. I first create a mask that limits the correction to the white area:
newMask = Binarize[jupInit, {0.01, 1}]
When I apply this mask, I get:
jupBright = BrightnessEqualize[jupInit, Masking -> newMask]
It’s much darker now, so I have to readjust the image. This time, I’m doing it interactively using a Manipulate:
stretchImage[image_] := Block[{thumbnail},
  thumbnail = ImageResize[image, Scaled[.7]];
  With[{t = thumbnail},
   Manipulate[
    ImageAdjust[t, {c, b, g}],
    {{c, 0, "Contrast"}, -5.0, 5.0, 0.01},
    {{b, 0, "Brightness"}, -5.0, 5.0, 0.01},
    {{g, 2.0, "Gamma"}, 0.01, 5.0, 0.001},
    ControlPlacement -> {Bottom, Bottom, Bottom}]]];
stretchImage[jupBright]
I use the parameter values I found with the Manipulate to create an adjusted image:
jupadj = ImageAdjust[jupBright, {.16, 3.14, 1.806}];
Any time an image is captured on camera, it’s always a little bit blurred. The Wolfram Language has a variety of deconvolution algorithms available for immediate use in computations—algorithms that reduce this unintended blur.
Most folks who do image processing, especially on astronomical images, have an intuition for how best to recover an image through deconvolution. Since I don’t, it’s better to do this interactively:
deconvolveImage[image_] := Block[{thumbnail},
  thumbnail = ImageResize[image, Scaled[.7]];
  With[{t = thumbnail},
   Manipulate[
    ImageDeconvolve[t, GaussianMatrix[n], Method -> "RichardsonLucy"],
    {{n, 0, "Blur Correction Factor"}, 1, 3.0, 0.1},
    ControlPlacement -> Bottom]]];
deconvolveImage[jupadj]
Again, I use the blur correction I found interactively to make an unblurred image:
jupUnblur = ImageDeconvolve[jupadj, GaussianMatrix[1.7], Method -> "RichardsonLucy"];
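ImageDeconvolve does the real work here; for intuition, the Richardson–Lucy iteration is short enough to write out. A textbook 1D sketch in plain Python (my illustration, not the Wolfram implementation):

```python
def convolve_same(signal, kernel):
    """'Same'-size 1D convolution with zero padding (odd-length kernel)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution: iteratively rescale the estimate by
    the back-blurred ratio of the observed data to the re-blurred estimate."""
    estimate = [0.5] * len(observed)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = convolve_same(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve_same(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

On noiseless data this iteration progressively reassigns the blurred intensity back onto the original features; with real noisy images, stopping after a moderate number of iterations avoids amplifying noise.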
And as a sanity check, I’ll see how these changes look side by side:
table = Transpose@{{"Original", jup}, {"Initial Correction", jupInit}, {"Uniform Brightness", jupBright}, {"Better Adjustment", jupadj}, {"Deconvolved Image", jupUnblur}};
Row[MapThread[Panel[#2, #1, ImageSize -> Medium] &, table]]
Now that the image has been cleaned up and prepared for use, it can be analyzed in a variety of ways—though it’s not always apparent which way is best. This was a very exploratory process for me, so I tried a lot of methods that didn’t end up working right, like watershed segmentation or image Dilation and Erosion; these are methods that are great for binarized images, but the focus here is enhancing colorized images.
With Jupiter, there is a lot of concentration on the Great Red Spot, so why not highlight this feature of interest?
To start, I need to filter the image in a way that will easily distinguish three different regions: the background, the foreground and the Great Red Spot within the foreground. In order to do this, I apply a MeanShiftFilter:
filtered = MeanShiftFilter[jupadj, 1, .5, MaxIterations -> 10]
This is useful because the filter smooths away the jagged, noisy detail inside each region while preserving edges, making the boundary around the Great Red Spot smoother and easier for a computer to detect.
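The edge-preserving behavior of a mean-shift filter is easiest to see in one dimension. A toy Python sketch of the idea (not the Wolfram implementation; radii and iteration count are illustrative):

```python
def mean_shift_filter_1d(values, spatial_radius, range_radius, iterations=10):
    """Edge-preserving smoothing: each sample is replaced by the mean of
    nearby samples whose values lie within range_radius of it, so the
    averaging never mixes the two sides of a large jump (an 'edge')."""
    vals = [float(v) for v in values]
    for _ in range(iterations):
        out = []
        for i, v in enumerate(vals):
            lo = max(0, i - spatial_radius)
            window = vals[lo:i + spatial_radius + 1]
            close = [w for w in window if abs(w - v) <= range_radius]
            out.append(sum(close) / len(close))
        vals = out
    return vals
```

Noise within each flat region is averaged away, while a step larger than the range radius survives intact; the 2D filter applies the same idea to pixel neighborhoods and color distances.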
Using Manipulate once again, I can manually place seed points that indicate the locations of the three regions of the image (you can see how much the filter above helped separate out the regions):
Manipulate[seeds = pts;
 Row[{Image[jupadj, ImageSize -> All],
   Image[ImageForestingComponents[jupadj, pts] // Colorize, ImageSize -> All],
   Image[ImageForestingComponents[filtered, pts] // Colorize, ImageSize -> All]}],
 {{pts, RandomReal[Min[ImageDimensions[jupadj]], {3, 2}]}, {0, 0}, ImageDimensions[jupadj], Locator,
  Appearance -> Graphics[{Green, Disk[{0, 0}]}, ImageSize -> 10], LocatorAutoCreate -> {2, 10}}]
The values of the seeds at these places are stored within a variable for further use:
seeds
Using these seeds, I can do segmentation programmatically:
Colorize[label = ImageForestingComponents[filtered, seeds, 2]]
With the regions segmented, I create a mask for the Great Red Spot:
mask = Colorize[DeleteBorderComponents[label]]
I apply this mask to the image:
ImageApply[{1, 0, 0} &, jupadj, Masking -> mask]
This is great, but looking at it more, I wish I had an approximate numerical boundary for the Great Red Spot region in the image. Luckily, that’s quite straightforward to do in the Wolfram Language.
Our interactive right-click menu helped me navigate the image to find the coordinates needed for creating this numerical boundary:
It's a handy UI feature of the notebook front end, intuitively guiding me to roughly where the y coordinate within the Great Red Spot is at a minimum:
As well as where the x coordinate within that same region is at a minimum:
I also did this for the maximum values of each coordinate. Using these values, I generate ranges of numbers with a step size of 0.1:
x = Range[144, 275, .1]; y = Range[264, 350, .1]; 
I construct the major and minor axes:
xRadius = (Max[x] - Min[x])/2; yRadius = (Max[y] - Min[y])/2;
And I approximate the center:
center = {Min[x] + xRadius, Min[y] + yRadius} 
And finally, I create the bounding ellipse:
bounds = Graphics[{Thick, Blue, RegionBoundary[ Ellipsoid[center, {xRadius, yRadius}] ]}] 
This bounding ellipse is applied to the image:
HighlightImage[jupadj, bounds] 
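The ellipse construction above is simple arithmetic on the coordinate extents, and it can be checked in a few lines of Python (using the same extents read off via the right-click menu: x in [144, 275], y in [264, 350]):

```python
# Semi-axes and center of the bounding ellipse from coordinate extents.
x_min, x_max = 144.0, 275.0
y_min, y_max = 264.0, 350.0

x_radius = (x_max - x_min) / 2          # horizontal semi-axis
y_radius = (y_max - y_min) / 2          # vertical semi-axis
center = (x_min + x_radius, y_min + y_radius)

print(x_radius, y_radius, center)       # -> 65.5 43.0 (209.5, 307.0)
```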
Aside from performing image processing on external JunoCam images in order to better understand Jupiter, there are a lot of built-in properties for Jupiter (and any other planet in our solar system) already present in the language itself, readily available for computation:
Entity["Planet", "Jupiter"]["Properties"] // Take[#, 30] &
Included here is a textured equirectangular projection of the surface of Jupiter: perfect for 3D reconstruction!
surface = Entity["Planet", "Jupiter"][ EntityProperty["Planet", "CylindricalEquidistantTexture"]] // NestList[Sharpen, #, 2] & // #[[1]] & 
Using this projection, I can map it to a spherical graphic primitive:
sphere[image_] := Block[{plot},
  plot = SphericalPlot3D[1, {theta, 0, Pi}, {phi, 0, 2 Pi},
    Mesh -> None,
    TextureCoordinateFunction -> ({#5, 1 - #4} &),
    PlotStyle -> Directive[Texture[image]],
    Lighting -> "Neutral", Axes -> False, Boxed -> False,
    PlotPoints -> 30]]
sphere[surface] 
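The underlying mapping is worth spelling out: an equirectangular texture assigns its horizontal coordinate to longitude and its vertical coordinate to latitude, which is what a texture coordinate function of the form {phi, 1 - theta} (in normalized coordinates) expresses. A small Python sketch of that mapping, with the function name my own:

```python
import math

def equirect_to_sphere(u, v, radius=1.0):
    """Map normalized equirectangular texture coordinates (u, v) in
    [0, 1]^2 onto a sphere: u spans longitude 0..2*pi, and v runs from
    the south pole (v = 0) to the north pole (v = 1), matching a
    texture coordinate function of the form {phi_norm, 1 - theta_norm}."""
    phi = 2 * math.pi * u           # longitude
    theta = math.pi * (1 - v)       # colatitude, measured from the north pole
    x = radius * math.sin(theta) * math.cos(phi)
    y = radius * math.sin(theta) * math.sin(phi)
    z = radius * math.cos(theta)
    return (x, y, z)

# The top row of the texture (v = 1) lands on the north pole:
print(equirect_to_sphere(0.0, 1.0))   # -> (0.0, 0.0, 1.0)
```

Flipping v (the `1 - theta` part) is what keeps the texture right side up: image rows count downward, while colatitude counts down from the pole.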
I started out knowing next to nothing about image processing, but with very few lines of code I was able to mine and analyze data in a fairly thorough way—even with little intuition to guide me.
The Wolfram Language abstracted away a lot of the tediousness that would’ve come with processing images, and helped me focus on what I wanted to do. Because of this, I’ve found some more interesting things to try, just with this set of data—like assembling the raw images using ImageAssemble, or trying to highlight features of interest by color instead of numerically—and feel much more confident in my ability to extract the kind of information I’m looking for.