An Analytic Advantage
Efficient biomass supply chain systems are critical to the success of the biorefining industry. No matter how impressive a future commercial-scale plant’s conversion technology may be, the facility could easily fail if feedstock cannot be delivered in a timely, cost-effective, efficient manner. A new analysis tool, known as “BioFeed,” is showing great promise in helping design supply chain systems that can meet those requirements, ensuring a reliable flow of feedstock.
The BioFeed model is the product of University of Illinois researchers funded by BP’s Energy Biosciences Institute under a research program titled Engineering Solutions for Biomass Feedstock Production. According to Kuan Chong Ting, a professor of agricultural and biological engineering who leads the research program, it comprises five task areas: harvesting, crop monitoring, transportation, storage, and systems informatics and analysis.
While the first four tasks involve developing better technologies for those particular components of the supply chain system, Ting says that the goal of the fifth task is to try to fit the other four tasks together in a way that makes the entire biomass system—from farm operations to delivery to the biorefinery gate—more efficient. “Within that task, we need to have a computational tool to help analyze what is happening through the value chain, and modeling is one of the most important tools that we can use to do this kind of work,” Ting says.
About seven of the 20 researchers who currently work under the ESBFP program have contributed to the development of the model. They include Yogendra Shastri, visiting research assistant professor in the EBI; Alan Hansen, professor of agricultural and biological engineering; and Luis Rodríguez, assistant professor of agricultural and biological engineering. Rodríguez serves as leader for the task group that designed the model.
Hansen notes that BioFeed also enables the team to evaluate how changes in individual supply chain technologies flow through the entire feedstock supply chain. “We can introduce different types of technologies into that supply chain and see the impact on the overall system,” he says. “This has been really useful as we have proceeded in developing the model and evaluated different scenarios concerning the introduction of different types of technology and storage options. That has been very important in the model development.”
“When we started developing the model, we looked at all the important operations a feedstock or energy crop would go through before it was delivered to the refinery,” Shastri says. Among the most important of these steps are harvesting; postharvest packaging, such as baling, grinding or pelleting; handling; storage; and transportation. As a result, BioFeed is capable of optimizing more than 300,000 individual variables, including harvest schedules, equipment type, storage sizing, transportation attributes and the logistics involved with moving biomass from one place to another. BioFeed can also take into account regional factors, such as weather patterns, crop yield, farm size and transportation distances.
“What we essentially do is use mathematical equations to model these operations,” Shastri adds. The researchers input the appropriate data into the model, and the results are generated by solving the equations developed by the team. The goal is to discover the best design for a particular biomass supply chain system. “Typical results that we get out of the model [include] what kind of equipment should be used, what the size of the storage facility should be, what should be the biorefinery size, and how the equipment should be operated on a daily basis,” Shastri says. “There are quite a few decisions that have to be optimized, but these are some of the important ones we look at.”
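To make the idea of “solving the equations” concrete, here is a toy sketch of a supply-chain decision posed as a linear program. This is not the actual BioFeed formulation, and every number (costs, demand, capacities) is hypothetical; it only illustrates how an optimizer can return a harvest plan from cost equations and constraints.

```python
from scipy.optimize import linprog

# Hypothetical 3-day example: choose tonnes to harvest each day
# (x1, x2, x3) to meet a 100-tonne refinery demand at minimum cost.
cost = [20, 25, 30]          # assumed $/tonne, rising later in the season
A_ub = [[-1, -1, -1]]        # -(x1 + x2 + x3) <= -100, i.e. deliver >= 100 t
b_ub = [-100]
bounds = [(0, 50)] * 3       # assumed equipment capacity: 50 t/day

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)    # optimal daily harvest plan
print(res.fun)  # minimum total cost
```

The solver favors the cheapest days first, which is exactly the kind of day-by-day operating schedule Shastri describes, just at a vastly smaller scale than BioFeed’s 300,000-plus variables.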
One unique aspect of the model—and one of the reasons so many variables are analyzed—is that it looks at daily operations rather than designing a supply chain system that ignores seasonal and daily fluctuations throughout the year. It tells you how you should operate components of the supply chain, such as transportation and storage, on a daily basis. “When you start talking about these issues,” Shastri says, “the number of decisions that you have to make increases exponentially, and that is why we have so many variables in the model.”
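A quick back-of-the-envelope calculation, with purely hypothetical numbers, shows why moving from annual to daily resolution inflates the decision space so dramatically: the variable count multiplies by every day in the season.

```python
# Hypothetical illustration: decision variables scale with
# farms x days x operations once daily operations are modeled.
farms = 100
harvest_days = 120   # assumed days in a harvest season
operations = 5       # e.g., harvest, bale, store, load, haul

annual_vars = farms * operations                  # one decision per year
daily_vars = farms * harvest_days * operations    # one decision per day

print(annual_vars, daily_vars)  # 500 vs. 60,000
```

Scaling this toy example up to a real region with more farms, more operations and routing choices quickly reaches the hundreds of thousands of variables BioFeed handles.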
Due to the model’s capabilities and the extremely high volume of data and variables it is designed to analyze, it cannot be run on a standard computer. Rather, Shastri notes that the team is currently using a very powerful computer cluster to run the model. While designed to take in a huge amount of data, the model was also designed to be very flexible. “We’ve devised it in such a way that we can apply it to different crops and different geographical regions,” Shastri says. Furthermore, it has been built in a way that allows it to take in different types of data and still run as designed.
Ting explains that there are two different sides to the task team: informatics, or the collection of data, and analysis, which is the processing of that data. “The data provides values for the parameters, and the equations in the model are the analysis part,” he says. “If we separate the data from the model, we can program the model in a way that is very generic.” In other words, you can change the data values as you run different scenarios for different crops, regions and production methods. “That allows us to make the model very representative of what is happening in reality,” Ting says. “With that, we have a tool that many people can benefit from.”
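Ting’s separation of informatics (data) from analysis (equations) can be sketched as a generic model function that is reused with different parameter sets. The cost equation and all values below are hypothetical; the point is only that swapping the data re-targets the same model to a new crop or region.

```python
# Hypothetical sketch: one generic model (analysis), many data sets (informatics).
def delivered_cost(p):
    """Cost per tonne at the refinery gate from a generic parameter set."""
    return (p["harvest_cost"]
            + p["storage_cost"]
            + p["transport_cost_per_km"] * p["haul_distance_km"])

# Assumed, illustrative parameter values ($/tonne and km).
miscanthus_illinois = {"harvest_cost": 18.0, "storage_cost": 6.0,
                       "transport_cost_per_km": 0.12, "haul_distance_km": 40}
switchgrass_iowa = {"harvest_cost": 15.0, "storage_cost": 7.5,
                    "transport_cost_per_km": 0.12, "haul_distance_km": 65}

for name, data in [("miscanthus/IL", miscanthus_illinois),
                   ("switchgrass/IA", switchgrass_iowa)]:
    print(name, delivered_cost(data))
```

Because the equations never change, only the dictionaries do, the same code runs every scenario, which is what makes the generic design “very representative of what is happening in reality.”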
According to Rodríguez, his team has already been approached by several entities hoping the model will be available for use. He says the Illinois Department of Transportation expressed interest in using it to figure out an effective way to produce biomass to power its fleet. While it is likely some version of the tool will be made publicly available in the future, the details around this have not been worked out yet. “I anticipate that we will find ways to publish it in a way that can be distributed, but working with BP right now there are, of course, some intellectual property concerns,” Rodríguez says. “It’s not something that we’ll be rushing out the door with, but given the commercial interest, I think we are looking into that now.” Shastri says the team is working to develop a Web-based interface that would connect to the model. “The idea is that somebody could go on the Web and access the model through a user interface,” he says. “Hopefully in the future, when the model is commercialized, that will be one of the ways to make it accessible to members of the general public.”
Assuming the model is eventually made available, there are several groups of people who could benefit beyond biorefining project developers and investors. Those in the research community could use it to identify areas where additional research and development are needed. Developing such a model in concert with others developing biomass technologies provides “checks and balances” for both sides, Rodríguez says. Technology developers can identify where new information is needed. “By taking the extra step of integrating those elements in the form of a model,” he says, “we can see how those predicted needs are really going to impact the future of the system quantitatively.” The educational community is another group that could benefit. Universities and colleges are working to educate the industry’s next generation of human capital, Ting says. Teachers would like to have this model to demonstrate how changes in the values for variables can impact the performance of an entire biomass system. “There are many markets for this kind of model,” Ting says. “The question is how do we deliver to the right market? There are some rules we have to follow on campus, and we have to be accountable to our sponsors, so we are making that arrangement to see if we can make this model available to various potential users.”
The team also expects to continue making improvements to the model. “One of the important issues we want to address is uncertainty,” says Shastri. “When you talk about farm operations and production of these crops, there are things that are uncertain. We don’t know what the weather will be. We don’t know how the yield will be. There are systematic ways of using mathematics to address those issues. That would be an important future addition to the model.”
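One “systematic way of using mathematics” to handle the kind of uncertainty Shastri describes is Monte Carlo simulation: sample many possible yields and evaluate a plan across all of them rather than against a single point estimate. The sketch below is a hypothetical illustration, not the team’s method, and all numbers are assumed.

```python
import random

# Hypothetical Monte Carlo sketch: yield per hectare is uncertain,
# so score a harvest plan against many sampled yield scenarios.
random.seed(42)

def shortfall(planned_hectares, yield_t_per_ha, demand_t=1000.0):
    """Tonnes of unmet refinery demand for one sampled yield."""
    return max(0.0, demand_t - planned_hectares * yield_t_per_ha)

# Assumed yield distribution: mean 10 t/ha, std. dev. 1.5 t/ha.
samples = [random.gauss(10.0, 1.5) for _ in range(10_000)]

expected = sum(shortfall(110, y) for y in samples) / len(samples)
print(round(expected, 1))  # average shortfall in tonnes under uncertainty
```

A deterministic model would report zero shortfall here (110 ha at 10 t/ha exceeds 1,000 t), yet the expected shortfall across scenarios is positive, which is exactly why explicitly modeling uncertainty matters for sizing equipment and storage.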
Ting adds that the model is programmed in a way that allows it to answer system-level questions. “We have a whole list of system-level questions we would like to answer,” he says, including those related to what percentage of cost should go to harvesting, transportation, storage and other elements of the supply chain. “When the system-level questions start to expand, we either change the data or we change the model,” he says. “That is where we start to tweak and improve the model. There is no end. There is no perfect model, but our goal is to provide a useable model.”
The team is also working to develop additional modeling and analysis approaches. In particular, Rodríguez says, the researchers are looking into how those interested in establishing a new biomass-to-bioenergy market should choose sites for their facilities. “We also have an agent-based modeling approach to study how, once these investments are made, these systems evolve over time,” he says.
“Finally, we are looking at some modeling and analysis work at a lower operational level: not just day-to-day decisions, but decisions within the day, so we can help people better operate their systems.”
Author: Erin Voegele
Associate Editor, Biorefining Magazine