Space scientists are used to dealing with things out of this world. To properly study this complex, real-time, data-abundant domain demands powerful digital infrastructure and broad technical expertise. Southwest Research Institute brought together a diverse team of experts to break new ground by moving space science, manufacturing, and project management applications into the cloud, the virtual world beyond individual computers.
Robert Thorpe is a senior program manager who oversees the development and implementation of software data systems that support NASA and ESA missions. He spoke with Technology Today about how many of these are now offered as high-availability, cloud-based applications.
In 2011, Technology Today interviewed Robert L. Thorpe, a senior program manager in SwRI’s Space Science and Engineering Division, about the Project Information Management System (PIMS), a specialized software tool for project managers. Since then, increased demands for both capacity and reliability have led to the development of a line of high-availability, cloud-based applications. We recently sat down to ask him about the transition.
Cloud computing provides access to applications and services via the internet, instead of from your hard drive or local servers. Cloud providers manage remote infrastructure and platforms that run applications. Cloud applications fall into the “software as a service” or “on-demand software” category.
Robert Thorpe: Of course. Back in the late 1990s, I was on a team supporting the Ion and Neutral Mass Spectrometer, one of the science instruments aboard NASA’s Cassini mission to Saturn. Near the end of one of the team meetings, the project manager began handing out action items on sheets of paper. To make a long story short, I suggested creating a web application to track our action items instead, so we could digitally record and document progress. I knew it could be done quickly with some of the newer web-database technologies, so to counter concerns about cost and schedule, I said I could build it in a week. That generated a certain amount of disbelief, so as a last resort I made it a bet.
The manager wasn’t convinced, but a week wasn’t much to risk, so he took a chance. A week later, the Action Item Management System — now called PIMS (pims.swri.org) — was born. Since then, PIMS has been adopted not only by NASA and European Space Agency missions but also by commercial clients, and it simultaneously supports hundreds of multimillion-dollar, even billion-dollar, projects. We began licensing it for customer sites, eventually offering it as a cloud application. That’s the most recent evolution — migrating PIMS into the cloud.
In the meantime, we’ve developed more than 30 different science, manufacturing, and project management production applications. Some are large, some are small. Some, like our Progress project-based manufacturing software (progress.swri.org) and our Juno Science Operations Center (JSOC), need to be online continuously. The respective clients want zero downtime, or as close as we can get.
RT: It’s the opposite of mass production or in-line manufacturing. In-line manufacturing is used to make large quantities of a standardized product, things like smartphones, televisions, cars, and other assembly-line products. There are a lot of great software products out there that support in-line manufacturing. However, with project-based manufacturing, you’re building one, two, or a small batch of products, and for space applications, you’ve really got to get them right. Examples include building space science instruments, space-qualified custom computer boards, and custom parts for spacecraft, which are a few of the things we do here at SwRI. We have found very few commercial software products that specifically address managing high-quality, small-quantity manufacturing the way Progress does. If you’re doing this sort of project-based manufacturing, SwRI’s Progress manufacturing software is for you. It’s become such an integral part of our manufacturing process, we need it to be online 24/7/365.
RT: JSOC is the science planning and data management center for the NASA Juno mission to Jupiter (missionjuno.swri.edu). The JSOC data system was designed and deployed at SwRI. It’s key to planning the science activities for the Juno mission.
Scientists are passionate about the data they collect from worlds like Saturn, Jupiter, and Pluto. However, there’s only so much data that can be transmitted back from the spacecraft. You can think of it as a fixed-capacity pipeline that exists from Jupiter back to Earth — only so much data can go through it. Each Juno science instrument is collecting data during each Jupiter orbit and especially during the flybys. The teams for the 11 science investigations work together to allocate the “pipeline” resources based on science priorities. Additionally, the science teams monitor the data flowing through the pipeline in near-real-time as the data return to Earth, to see if they should adjust their plan for the next Jupiter orbit and flyby.
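The resource-allocation idea can be sketched in a few lines: a fixed downlink budget is divided among instrument teams in proportion to agreed science priorities. The instrument names, weights, and budget below are invented for illustration, not actual Juno allocations.

```python
# Illustrative sketch: splitting a fixed-capacity "pipeline" among teams
# in proportion to priority weights. All numbers here are hypothetical.

def allocate_downlink(budget_mbits, priorities):
    """Split a fixed data budget proportionally to priority weights."""
    total = sum(priorities.values())
    return {team: budget_mbits * weight / total
            for team, weight in priorities.items()}

# Hypothetical priorities for one orbit's flyby
priorities = {"instrument_a": 5, "instrument_b": 3, "instrument_c": 2}
shares = allocate_downlink(1000, priorities)
print(shares)  # instrument_a gets half of the 1,000-megabit budget
```

In practice the negotiation is far richer than a proportional split, but the constraint is the same: the shares must sum to the pipeline's capacity.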
The JSOC data system is the platform for all this planning and collaboration. It also supports data management, document management, and archiving science data to the NASA Planetary Data System nodes, which puts data out in the public domain. The project team wants this large data system continuously available.
RT: Based on risk analysis for our manufacturing environment, we determined that Progress is such a critical application, it can’t be offline more than 5 minutes, ever. Putting it in the cloud makes it a lot easier to keep it online. JSOC was another easy candidate to be a cloud application. Originally, the mission was planned for orbits around Jupiter that were only 14 days long, so we knew from the beginning the science teams would want JSOC online constantly. The science and operations pace was going to be very intense. Due to changes in the mission after arriving at Jupiter, the orbits are at a more relaxed 53 days. Still, everyone wants the application on all the time. That did not get relaxed.
RT: Although it does require specialized expertise, setting up a basic cloud architecture is not too complex. The first cloud we set up was pretty easy, mainly because space sciences didn’t set it up at all. We arranged a deal with SwRI’s Information Technology Center (ITC) to duplicate a cloud they already had in place. The space science division provided the hardware, and ITC set it up and now manages day-to-day operations. In return, they use a percentage of the cloud for other SwRI divisions that are interested in cloud application support. We use it for a lot of our support applications that need to be online most of the time but don’t have special uptime requirements or lots of customers.
The cloud we developed for PIMS, Progress, and JSOC was significantly more complex than the first one. It had special requirements, including completely redundant hardware in two different buildings, many terabytes of storage on advanced storage arrays, and high-end relational database failover capability. This setup essentially means any single piece of cloud hardware could fail and everything would still run.
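The failover behavior described above can be illustrated with a toy monitor: count consecutive failed health checks against the primary, and promote the standby once a threshold is crossed. The host names, check history, and threshold are all invented; in the real system this is handled by dedicated database and cluster software, not hand-rolled scripts.

```python
# Toy sketch of automatic failover: promote the standby after several
# consecutive failed health checks. Hosts and threshold are hypothetical.

def choose_active(health_history, threshold=3,
                  primary="db-building-1", standby="db-building-2"):
    """Return the host that should serve after a series of health checks."""
    consecutive_failures = 0
    active = primary
    for check_ok in health_history:
        consecutive_failures = 0 if check_ok else consecutive_failures + 1
        if consecutive_failures >= threshold:
            active = standby  # no automatic failback once promoted
    return active

# Two blips don't trigger failover; three failures in a row do.
print(choose_active([True, False, False, True]))         # db-building-1
print(choose_active([True, False, False, False, True]))  # db-building-2
```

Note the deliberate asymmetry: once the standby is promoted, a later successful check does not fail back automatically, which mirrors the common practice of requiring a human decision to return to the repaired primary.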
For the second high-end cloud, lots of people helped. Project managers and division management helped define requirements and get equipment purchased. SwRI’s IT experts did all the technical “heavy lifting” setting up the cloud — configuring the servers, networking, and high-end database technologies. That experience speaks to two things I’ve always liked about SwRI: First, I’ve never met anyone who didn’t appreciate a good idea. Second, SwRI is not too small to find someone who can help, and not so big that lots of red tape gets in the way.
RT: Absolutely. It’s a lot easier than buying individual servers and setting them up. One example is the recent launch of our Supernova Analysis Application, or SNAP, as a cloud application. It provides functions to import models of supernovas as well as actual supernova observations, and correlates the two using scientific formulas and database links. With new supernova-detecting surveys coming online, the full-sky telescope observations will be pouring in. SNAP is an excellent example of an application that fits easily into a cloud architecture to provide web-based supernova analysis capabilities.
Relational database failover supports high-availability, high-reliability applications.
A relational database is a collection of data organized as a set of tables, which allows data to be accessed or reassembled in many different ways without affecting the database tables. Failover protects computer systems from failure; secondary equipment automatically takes over when components fail.
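The sidebar's point about reassembling data "in many different ways without affecting the database tables" can be shown with a small, self-contained example using Python's built-in sqlite3 module. The table and column names are invented for illustration; they are not the actual PIMS or Progress schemas.

```python
# Minimal relational-database sketch: the same tables can be queried and
# recombined in different ways without changing the tables themselves.
# Schema and data are hypothetical, loosely echoing the action-item story.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE action_items (
        id INTEGER PRIMARY KEY,
        project_id INTEGER REFERENCES projects(id),
        description TEXT,
        done INTEGER
    );
    INSERT INTO projects VALUES (1, 'Cassini INMS'), (2, 'Juno JSOC');
    INSERT INTO action_items VALUES
        (1, 1, 'Review calibration data', 1),
        (2, 1, 'Update telemetry parser', 0),
        (3, 2, 'Plan next flyby downlink', 0);
""")

# One of many ways to reassemble the data: count open items per project.
open_items = conn.execute("""
    SELECT p.name, COUNT(*) FROM action_items a
    JOIN projects p ON p.id = a.project_id
    WHERE a.done = 0 GROUP BY p.name ORDER BY p.name
""").fetchall()
print(open_items)  # [('Cassini INMS', 1), ('Juno JSOC', 1)]
```

A different query against the same two tables could list items by assignee or due date; the tables themselves never change shape to support a new view, which is the property the sidebar describes.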
RT: It really depends on what the client needs. Cloud-based applications are much easier to maintain, but well-trained cloud experts must be available. It’s very important to maintain redundant expertise in cloud technologies, so if one of your cloud specialists wins the lottery and leaves, you’re not left out in the cold. SNAP, JSOC, Progress, and PIMS fit well into the cloud environment. Other applications fit best on physical servers, and sometimes we use physical servers as backup servers or for other tasks where things are better kept simple.
RT: Yes, for example, Progress and PIMS have supported more than 150 NASA, ESA, commercial, and defense projects to date. If a client purchases one of these applications, we are happy to host it in our cloud. While there are commercial clouds we could use, we prefer hosting apps in our cloud where we have direct control over security and backups. Plus, we don’t have to contend with issues storing proprietary data in a third-party cloud.
Once we built our clouds and placed our applications in them, it became easy to clone these applications for other clients, who get all the benefits of an application being in the cloud. Additionally, if you need an individual application to run faster, you just allocate more cloud resources to its virtual server. If you’re running low on cloud resources, you just add more hardware to the cloud.
RT: Compared to legacy systems, the complementary PIMS and Progress tools improve responsiveness by three or four times: expediting on-time deliveries, customer response time, troubleshooting, and problem resolution. They have become key components of our manufacturing and project management processes and provide a highly integrated foundation for our electronic quality management system. Our cloud-based architecture helps us keep these critical systems online, and also directly aligns with today’s modern project teams, which may be spread out across the U.S. or even the world.
RT: For now, the wave of the future is cloud computing. We’re doing our best to ride the right part of the wave — not too far out in front where you can wipe out, but not so far behind where you miss the wave entirely. We want to be in the ideal place to provide our clients with the best possible solutions.
As far as what comes next, that’s somewhat challenging to predict. Maybe in 10 or 20 or 30 years, we’ll be able to run something as complex as our second cloud on two quantum-memory, fusion-powered super-smartphones that synchronize with each other and provide services anywhere in the world … or maybe even off-planet.
Whatever technologies evolve, we’ll always aim to bring the appropriate technical experts together to provide the best possible solutions for our clients.
Questions about this article? Contact Robert Thorpe at 210.522.2848.