
Advancing Catalytic Fast Pyrolysis through Integrated Experimentation and Multiscale Computational Modeling—Video Text Version

This is the text version for the Advancing Catalytic Fast Pyrolysis through Integrated Experimentation and Multiscale Computational Modeling webinar.

>>Erik Ringle: Well hello everyone, and welcome to today’s webinar, “Advancing Catalytic Fast Pyrolysis through Integrated Experimentation and Multiscale Computational Modeling.” I’m Erik Ringle from the National Renewable Energy Lab, and we’re happy you joined us today. We’ll be hearing from Bruce Adkins, Brennan Pecha, and Michael Griffin.

Bruce is a senior scientist in the Chemical Process Scale-Up group at Oak Ridge National Lab. He has 40 years’ experience in modeling, design, scale-up, and operation of thermocatalytic catalysts, processes, and equipment. His process areas include coal conversion, petroleum refining, catalyst manufacturing, and biofuels and biochemicals production.

Brennan is a scientist at the Renewable Resources and Enabling Sciences Center at the National Renewable Energy Lab, where he has been since 2017. Most of his research focuses on multiscale modeling of biomass pyrolysis and vapor-phase catalysis as a member of the Feedstock-Conversion Interface Consortium, as well as the Consortium for Computational Physics and Chemistry. He believes concerted modeling and basic research can de-risk the technologies that will sustain life for future generations.

Lastly, Michael is a senior engineer in the Catalytic Carbon Transformation and Scale-Up Center at the National Renewable Energy Lab. His research interests include surface chemistry, catalysis, reaction engineering, and process development. He is currently the principal investigator for the Catalytic Upgrading of Pyrolysis Vapors project within the ChemCatBio Consortium, and is committed to advancing the foundational science and market impact of decarbonization technologies.

Before we get started, though, I’d like to go over a few housekeeping items so you know how you can participate today. During the webinar, attendees will be in listen-only mode. You can select audio connection options to listen to your computer audio or dial in through your phone. For the best connection, we recommend calling in through a phone line. You may submit questions for our speakers today using the Q&A panel. If you are in full-screen view, click the question mark icon located on the floating toolbar at the lower right side of your screen to open the Q&A panel.

If you are in split-screen mode, the Q&A panel is already open and is located at the lower right side of your screen. You may send in your questions at any time during the presentations. We will collect these and address them during the question-and-answer session at the end. If you have technical difficulties or need help during today’s session, you can also use the chat section to reach me directly. The chat section appears as a comment bubble in your control panel. We are recording today’s webinar and it will be posted at www.chemcatbio.org within the next week or so.

Without further ado, I will pass it over to Mike to kick us off.

>>Mike Griffin: Okay. Hello everyone. Mike Griffin here with my colleagues Brennan Pecha and Bruce Adkins. Thank you, Erik, for the great introduction and thank you to the organizers. We’re excited to be a part of the ChemCatBio webinar series and to have the opportunity to share some of our research with you today. So I’ll kick us off here with an overview of catalytic fast pyrolysis and some of the experimental data that we’ve collected. And then I’ll hand it off to Brennan and Bruce, who will talk a little bit more about some of their computational modeling work. And then we’ll make sure to leave a couple minutes at the end for questions.

Okay, so we’re interested in catalytic fast pyrolysis because it’s an adaptable pathway for the deconstruction and conversion of woody biomass and waste carbon sources to produce biogenic fuel blendstocks and chemical coproducts. And there are several steps involved in this pathway, all the way from biomass to blendstock. And, certainly as an organization, the Bioenergy Technologies Office is actively doing research in each one of these areas. But for the sake of today’s presentation, we’re focusing really on the last three steps. So that’s fast pyrolysis, vapor-phase catalytic upgrading, and then hydrotreating and fractionation. And all three of these are highly relevant to catalytic fast pyrolysis, whether within a stand-alone biorefinery concept or within a broader coprocessing strategy with traditional petroleum refineries.

So before jumping into some of the data, just a little bit of background and context. There are several different approaches to catalytic fast pyrolysis, with perhaps the main differentiator being in situ versus ex situ. So for in situ CFP, the catalytic upgrading occurs within the pyrolysis reactor itself, whereas for ex situ CFP, the catalysis happens in a separate downstream reactor system. There are pros and cons to both approaches certainly. But, again, for the sake of this presentation, we’re going to focus in specifically on ex situ CFP. And even within that space, there are different catalysts and reactor configurations that could be utilized.

So historically, a conventional approach has been to use zeolite catalysts in a fluidized bed configuration. And there’s quite a bit of pilot- and even demonstration-scale data that nicely shows the technical feasibility of this approach. But one of the key challenges and barriers to commercialization really comes down to the rapid coking that can occur, which reduces yields, necessitates frequent regeneration cycles, and ultimately ends up driving up costs.

So, as a potential alternative, there’s been quite a bit of fundamental research exploring fixed-bed hydrodeoxygenation. And this work has highlighted opportunities for improved yields, primarily through reductions in coking by removing oxygen in the form of water rather than carbon-containing compounds like CO or CO2. But really, you know, the gap here is that a lot of this data has relied on model compound reaction testing or been performed at the microscale, and not always in a continuous fashion. And that lack of realistic, integrated reaction testing data increases the risk and the uncertainty of this approach.

So one of our goals with this project, experimentally, has been to help close this gap by providing an industry-relevant experimental assessment of fixed-bed hydrodeoxygenation. And so to do this, the feedstock that we used was a commercially harvested combination of debarked loblolly pine stemwood and forest residues, which was procured and processed by our partners at the Idaho National Lab. And in terms of the catalyst, we used a platinum-on-titania material that was developed here at the National Renewable Energy Lab. The metal loadings for this catalyst ranged from 0.5 to 2.0 wt %. And importantly here for the data that we’ll show today, all of these materials were prepared with commercially available technical supports, as opposed to powders, for example.

The reactor system that we used was one of NREL’s large bench-scale integrated reactor units. It was operated near atmospheric pressure with about 100 grams of catalyst per experiment and a biomass flow rate of between 100 and 200 grams per hour. And one of the nice things about this reactor is that it allowed us to collect considerable quantities of upgraded CFP oil for downstream processing, as well as detailed analytics, so on the order of 10 to 12 liters over the experimental duration.

Okay, so just very briefly, kind of touching at a high level on some of the data that’s been collected—and as an aside, I’ll say that there’s certainly more information available in the citations shown on this slide and the other slides throughout the presentation. But in terms of key metrics, some of the things that we track are the carbon yield of the process and the oxygen content of the oil. And what we like to see when these are plotted against each other, as shown in the figure on the left, is high yields and low oxygen content. So shifts to the upper-left quadrant are desirable. And, in fact, that is what we observed for the platinum-titania material compared to a ZSM-5 baseline collected both through our own internal experiments here at NREL as well as data from the literature.

We also wanted to assess the stability of this reaction over about 140 reaction-regeneration cycles. And, nicely, what we observed after a short induction period was good stability in terms of the carbon yield and the oil oxygen content. I think certainly there’s more work to be done here to really prove out the overall process durability over the long term. But we were very encouraged by this promising proof-of-concept data set. And I think it showed that our catalysts could be regenerated effectively at the bench scale using fairly straightforward oxidative techniques.

Alright, so we work closely with our collaborators Huamin Wang and his team at the Pacific Northwest National Laboratory. And they did some nice work to show that this platinum-titania CFP oil could be hydrotreated using a single stage for greater than 80 hours without any indication of fouling or plugging. The carbon yields for the hydrotreating step were quite high—89%—and the oxygen content was below 0.2%.

And I want to just take a second to emphasize the fact that this was done with a single-stage system. And from a practical standpoint, that’s an important step forward compared to the hydrotreating of raw pyrolysis oil, which often requires multiple stages for stabilization and is prone to issues associated with fouling and plugging during the feeding process. Fractionation of the hydrotreated CFP oil showed about 45 wt % in the gasoline fraction and 39 wt % in the diesel fraction. And, you know, this was pretty high selectivity to the distillate range, maybe a little bit higher than we expected before doing these experiments. So that was a bit of a nice surprise and aligned well with our goals of producing fuels for the heavy-vehicle transportation sector, which is more difficult to electrify in the near term.

I’ll say that the fuel testing that we’ve been able to do for these fractions reveals the need for continued R&D. The octane value for the gasoline fraction was measured at about 65, and the cetane value for the diesel was 24. And this may be acceptable for a blendstock or within a refinery integration strategy at low blend rates. But really, we’d like to see these values closer to 85 and 40. And so I think this in itself actually provides a good differentiator for CFP, which really gives us an opportunity to tailor the catalytic steps so that we can potentially control the bio-oil composition and improve some of these fuel properties by promoting things like ring-opening reactions, for example. And this is definitely an ongoing research focus for our team moving forward.

Alright, so in terms of overall process yield, so CFP plus hydrotreating, we saw values of about 36% for platinum-titania compared to less than 22% for a ZSM-5 process. And we know from techno-economic analysis that this is one of the key cost drivers for the process. And so these improvements really do translate to considerable reductions in the fuel production costs. Our conceptual process models indicate a minimum fuel selling price of about $3.80 for the platinum-titania pathway. That’s for a fuels-only, stand-alone biorefinery approach.

And so it’s important to point out that there are opportunities for further reduction through things like refinery integration or the generation of chemical coproducts, for example, that could push these costs much lower, down towards $3.00. And that’s, again, certainly something that we’re continuing to work on. Importantly also, this pathway provides a considerable reduction in carbon intensity. So life cycle analysis shows a greater than 50% reduction in GHG emissions compared to petroleum-derived gasoline and diesel, for example.

Alright, so in terms of a summary and research needs, our integrated reaction testing confirmed the potential for improved performance from fixed-bed HDO and motivated investigation of process scale-up. So we saw a high yield, improved relative economics, low emissions. The initial assessment of process stability looks good; more work to be done there. And so that really brought us to a point where we’re thinking critically about how this technology translates best to a commercial-scale system.

And as part of this effort, we’ve been really fortunate to have collaborators within the CCPC, which is a computational consortium that I think Brennan and Bruce will speak more about. We’ve been able to leverage those partnerships to perform some really nice particle- and reactor-scale modeling. And this has been instrumental for us as an experimental team to help us get a better understanding of the reaction kinetics, address some of the questions about scale-up, and identify some needs for continued R&D in that area. So with that, I’m excited to pass it off here to my colleague Brennan, who is going to talk about some of the multiscale modeling that he and his team have done.

>>Brennan Pecha: Thanks, Mike. In this next part of the talk, I’m going to be discussing how we teased out fundamental information from benchtop packed-bed reactor experiments with multiscale modeling. A lot of promising bioenergy technologies tend to fail at scale-up. Why is this? Well, one of the reasons could be just that scaling factors are not linear. Take a tubular reactor, for example: the surface area scales as R squared, the volume scales as R cubed, and heat transfer can become difficult. That’s just one [inaudible] example.
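
As a quick back-of-the-envelope illustration of that point: if every linear dimension of a reactor is scaled by a factor s, the wall area available for heat removal grows as s^2 while the reacting volume grows as s^3, so the heat-transfer area per unit of reacting volume falls as A/V ∝ 1/s. A tenfold geometric scale-up therefore leaves each unit of reacting volume with roughly one-tenth of the wall area for removing heat, which is one reason thermal management gets harder at scale.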

Modeling can guide engineers moving from bench to pilot scale by having models that accurately represent the physics at all the levels that are relevant. Multiscale frameworks also enable the use of Department of Energy’s high-performance computing capacity, for example, the Eagle system shown in this picture. In this work, we will apply multiscale modeling to catalytic fast pyrolysis vapor-phase upgrading over platinum on titania.

If we can take one step back, we need to understand the multiscale phenomena before we can understand how to apply them to the models. At the atomic scale, the molecular structure of catalytic sites dictates reactivity, selectivity, and deactivation behavior. Moving up the scale, the micro- and meso-porosity that emerge from the crystal structure or support dictate nanoscale diffusivities of reactants and products and attenuate the reaction at that scale. At an even higher scale, the macro-porosity on the order of microns that emerges from microscale aggregates affects bulk intraparticle transport, again attenuating the reaction rate.

Finally, the catalyst particles themselves don’t necessarily have an ideal shape. There might be variations in porosity within the catalyst, like shown in this image of one of our ZSM catalysts. And those will impact intraparticle residence times of reactants and products. And then, finally, the reactor itself can dictate the environment that each of those catalyst particles is exposed to. And a poorly designed reactor will not take advantage of all the catalyst particles present, and you’ll observe an effective reaction rate that’s lower than you might expect from an idealized system.

So, in conclusion, the observed reaction rate that we see is really a combination of the intrinsic reaction kinetics, intraparticle diffusion, and extraparticle diffusion and convection. In all systems, those need to be taken into account when you’re developing a model. In this case, we’re looking at just the packed-bed reactor from that benchtop system that Mike described previously. It’s a very well-designed system, so we really don’t have too much radial variation in the diffusion. And that allows us to create a relatively simple model compared to those high-resolution models that have been presented in the literature. Next slide, please.
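
To put that combination into textbook shorthand, here is the single-reaction, first-order case (a reference point only, not the multistep formulation used in this work). The observed rate folds the external mass-transfer resistance and the internal effectiveness factor into one overall effectiveness:

    r_{\mathrm{obs}} = \Omega \, k \, C_{\mathrm{bulk}}, \qquad \Omega = \frac{\eta}{1 + \eta k / (k_c a_c)}

where \eta is the intraparticle effectiveness factor, k_c is the external mass-transfer coefficient, and a_c is the external particle area per unit volume. The observed rate approaches the intrinsic rate k C_bulk only when both resistances are small.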

And to look at this in more detail, again, we have a packed bed full of platinum on titania. The hot pyrolysis vapors enter at the inlet, and as they flow over the catalyst particles, they diffuse into the catalyst particles. We can see in this image that the platinum sites are distributed throughout the catalyst, so the primary vapors from pyrolysis will enter, go through a first partial deoxygenation reaction, and then be further deoxygenated to form hydrocarbons. So it’s really at least a two-stage reaction if we simplify it, and simultaneously we form coke and have some deactivation.

The data can really be interpreted best, so that we can fit a tractable model, by lumping together the chemical products that we observe in the GC-MS and through other characterization methods. We have low-molecular-weight pyrolysis vapors that we consider highly reactive. Those include ketones, aldehydes, acids, methoxyphenols, and sugars. OX represents partially deoxygenated compounds—an intermediate product, I would say—for example, phenol, methylphenol, furanics, and cyclopentenones. Then there are hydrocarbons, which are fully deoxygenated aromatics and alkanes. And then we have light gas (LG), coke that’s on the catalyst itself, light condensables like acetaldehyde, acetone, and butanone, and a fraction of high-molecular-weight pyrolysis vapor that is considered less reactive, often in the form of aerosols that can pass right through the reactor system.

We had two different wood feeds going into the pyrolysis reactor before the catalytic reactor: a 50/50 blend of clean pine and forest residues, and 100% clean pine. There were also nice data sets for four different biomass-to-catalyst ratios—in other words, time on stream—as well as a 0.5% platinum loading and a 1% platinum loading. So there are a variety of unique cases, as you can see in this picture; the yields look similar because the team did a really good job of trying to find conditions where you would get similar yields. But we can use this data to develop and validate a model.

Some of the important data that we have is on-stream mass spectrometry, and that shows what I’d call rapid deactivation, though that’s a relative term compared to some other catalytic systems. Over the course of 7 hours we do see notable deactivation. And working with the talented experimental team, the modeling team developed a reaction scheme similar to how they deal with fluid catalytic cracking in the petroleum industry, where we start with the reactive low-molecular-weight pyrolysis vapors, form the intermediate partially deoxygenated compounds, and then hydrocarbons. And we believe that there are two distinct active sites: probably an acidic site for that first reaction and the platinum site to transfer the hydrogen for the second reaction. All the stuff on the right shows side products like light gas and water formation, as well as coke formation, which we consider to deactivate the catalyst.
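
To make that lumped scheme concrete, here is a minimal, illustrative sketch of how such a cascade with coke-based deactivation can be integrated in Python. The lumps follow the scheme just described, but the rate constants and the exponential deactivation form are placeholder assumptions, not the values or functional forms fitted in this work.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative lumped cascade with coke-based deactivation:
    #   LMW -> OX -> HC, with LMW and OX also forming coke (CK).
    k1, k2 = 5.0, 2.0     # main-path rate constants, 1/s (placeholders)
    kc1, kc2 = 0.5, 0.2   # coking rate constants, 1/s (placeholders)
    alpha = 3.0           # deactivation sensitivity to coke (placeholder)

    def rhs(t, y):
        lmw, ox, hc, ck = y
        a = np.exp(-alpha * ck)   # activity drops as coke accumulates
        r1 = k1 * a * lmw         # LMW -> OX (acid-site step)
        r2 = k2 * a * ox          # OX  -> HC (Pt hydrogen-transfer step)
        rc1 = kc1 * a * lmw       # coke formed from LMW
        rc2 = kc2 * a * ox        # coke formed from OX
        return [-r1 - rc1, r1 - r2 - rc2, r2, rc1 + rc2]

    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0])
    print("final LMW, OX, HC, coke:", sol.y[:, -1])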

How do changes to the catalyst properties and operating conditions impact process performance metrics like yield, composition, and catalyst lifetime? To address that problem, we need to be able to accurately model multistep reactions. Typically that requires heavy computational resources to directly numerically simulate that system. But that’s not really suitable if we don’t have existing rate constants. In this case, we’re going to have to extract those rate constants by doing parametric sweeps. To do that we need a faster-solving model. An analytical solution for diffusion, reaction, and deactivation, we believe, is mathematically feasible and can help accurately represent multistep reactions.

What exists in the literature, from Chapter 6 of Fogler for all you chemical engineers, is the Thiele modulus, and that’s illustrated here. That’s an analytical solution for reaction and diffusion, and Rutherford Aris did a lot of work expanding it in the 1970s. But there’s no coupling of intraparticle sequential reactions in this solution. So without digging too much into the heavy math, we start with the unsteady advection-diffusion-reaction equation, assume a sphere, nondimensionalize it, and apply the boundary conditions, and we end up with a form that is commonly seen in differential equation textbooks. When the eigenvalues are real, the solution is a hyperbolic function, which converts back to concentration. And then, volume-averaging the rates, we end up with an effectiveness vector, not a factor but a vector, which is a system of equations that can be easily solved using common software like Matlab or Python, using matrix decomposition, etc. We end up with individual rates for each of the reactants and products that include the multistep effectiveness vector. More information is in the paper cited below if you’re interested.
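
For reference, the classic single-reaction, first-order, spherical-pellet result from Fogler that this work generalizes is

    \phi = R \sqrt{k / D_{\mathrm{eff}}}, \qquad \eta = \frac{3}{\phi^{2}} \left( \phi \coth\phi - 1 \right)

so the volume-averaged (observed) rate in the pellet is \eta k C_s, with C_s the concentration at the pellet surface. The multistep version described here replaces the scalar \eta with a vector of effectiveness values, one entry per reaction in the cascade; the full derivation is in the paper cited on the slide.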

What’s useful is that this provides us a solution for the particles—intraparticle diffusion and reaction. Then we need to apply that to a packed-bed model. In this case, we assumed an axial dispersion model and used the method of lines to numerically solve the packed-bed system with the transport equation shown here. And, again, those rates are not just straight reaction rates for the particle. They are solved at each of the nodes using that multistep effectiveness vector, and that gives us a multiscale, fast-solving model. Next slide, please.
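
As a rough illustration of what solving an axial dispersion model by the method of lines looks like in practice, here is a minimal sketch with a single lumped species and placeholder parameters; the actual model carries the full multistep effectiveness vector at every axial node rather than the single eta*k sink shown here.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Axial dispersion packed-bed model, solved by the method of lines:
    # discretize dC/dt = D_ax*d2C/dz2 - u*dC/dz - eta*k*C in z, integrate in t.
    L, N = 0.1, 100          # bed length (m) and number of axial nodes (placeholders)
    dz = L / N
    u, D_ax = 0.05, 1e-4     # superficial velocity (m/s), axial dispersion (m2/s)
    eta, k = 0.6, 10.0       # effectiveness factor and rate constant (placeholders)
    C_in = 1.0               # inlet concentration (arbitrary units)

    def rhs(t, C):
        dCdt = np.empty_like(C)
        Cup = np.concatenate(([C_in], C))         # upstream values, with an inlet ghost node
        for i in range(N):
            up = Cup[i]                           # upstream neighbor
            dn = C[i + 1] if i + 1 < N else C[i]  # zero-gradient outlet
            dCdt[i] = (D_ax * (dn - 2.0 * C[i] + up) / dz**2
                       - u * (C[i] - up) / dz
                       - eta * k * C[i])
        return dCdt

    sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(N), method="BDF")
    print("outlet concentration:", sol.y[-1, -1])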

So to parameterize the model, we did some multiscale imaging of the platinum-on-titania catalyst particles with the help of Peter Ciesielski. Light microscopy showed the particle size on the left. Scanning electron microscopy gave us the particle surface structure. And then TEM showed us the distribution of platinum sites, and we can see that they’re quite evenly distributed throughout the catalyst particle as approximately 5-nanometer dots.

We apply that, as well as all the other operating conditions, to the model. And we initially guessed the 10 rate parameters for one temperature and fed the entire multiscale packed-bed model through a simplex parameter optimizer in Matlab to essentially fit those rate constants. And we end up with a solution that matches well for this initial base case with half a percent platinum, half-millimeter particles, and a biomass-to-catalyst ratio of 12.
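
Conceptually, that fitting step looks something like the sketch below, written here in Python with SciPy’s Nelder-Mead (simplex) optimizer rather than Matlab; the surrogate model function, the measured-yield numbers, and the optimizer settings are placeholder assumptions, not the actual workflow.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical measured lumped-product yields for the base case (placeholders).
    measured_yields = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

    def run_packed_bed_model(k):
        """Toy surrogate standing in for the multiscale packed-bed model,
        which would map 10 rate constants to predicted lumped-product yields."""
        return k[:5] / (1.0 + k[:5] + k[5:])

    def objective(log_k):
        k = np.exp(log_k)   # optimize in log space to keep rate constants positive
        return np.sum((run_packed_bed_model(k) - measured_yields) ** 2)

    # Nelder-Mead is the simplex algorithm referenced in the talk.
    result = minimize(objective, np.zeros(10), method="Nelder-Mead",
                      options={"maxiter": 5000, "fatol": 1e-8})
    print("fitted rate constants:", np.exp(result.x))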

To validate the model, remember we have that nice set of four unique cases. The second from the top on the left is that base case that we fit. So, again, we can see the final yields match almost perfectly within the error. For the other cases, which are drastically different in terms of how long they were operated, feed, and the platinum loading, we were still able to accurately capture the yields. And that gave us a lot of confidence to explore different parameter spaces and inform the experimental team how to design catalysts. We can see that in the next slide.

On the left we have coke concentration versus reactor length. And that’s important because one of the outputs of these models was this coke concentration, which is used to inform catalyst regeneration. And we can see that there’s a lot more coke at the entrance than at the exit. That’s not what we initially assumed, but it makes sense: the vapors are traveling over the catalyst, they’re entering at the entrance, and, of course, they’re going to form more coke there.

On the top right, we have pressure drop versus particle diameter for different reactor diameters. And we can see that there is a particle diameter limit, as you might expect. If you get too small, your pressure drop is too high. But that needs to be balanced with vapor conversion, because better conversion occurs when you have smaller catalyst particles. And we did identify that in this type of system there is a nice operating regime, between particle diameters of 1 and 2 millimeters, that gets optimal yield with minimal pressure drop.
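
A standard way to estimate the pressure-drop side of that trade-off is the Ergun equation; the quick, illustrative sweep below uses placeholder gas properties, velocity, and void fraction rather than the actual reactor conditions, but it shows why pressure drop climbs so steeply at small particle diameters (the viscous term scales as 1/dp^2).

    def ergun_dp_per_length(dp, u, rho, mu, eps):
        """Ergun equation: pressure drop per unit bed length (Pa/m).
        dp: particle diameter (m), u: superficial velocity (m/s),
        rho: gas density (kg/m3), mu: gas viscosity (Pa.s), eps: bed void fraction."""
        viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * dp ** 2)
        inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * dp)
        return viscous + inertial

    # Placeholder conditions loosely representative of hot vapors carried in nitrogen.
    u, rho, mu, eps = 0.3, 0.5, 3.0e-5, 0.4
    for dp_mm in (0.5, 1.0, 2.0, 4.0):
        dP = ergun_dp_per_length(dp_mm * 1.0e-3, u, rho, mu, eps)
        print(f"dp = {dp_mm} mm  ->  dP/L = {dP / 1000.0:.1f} kPa/m")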

In conclusion, a new multiscale simulation framework was capable of capturing multiple cascading reactions, multiple operating conditions, and catalyst loadings, as well as active-site deactivation. It’s fast and accurate and can be used to mine existing data. Future work will extend the models to other catalyst shapes and other technologies. In the next slide, you’ll see how results from this work were used to design a catalytic regeneration system at a much larger scale with a different set of modeling tools.

>>Erik: So I think at this point, Bruce Adkins is going to give us a presentation. And, Bruce, just a heads-up that we have 15 minutes left to us.

>>Bruce Adkins: Got it, okay. Hey, Brennan, thanks for the nice segue into my part of the presentation. I am going to be focusing more on scale-up, and I’m also going to be focusing more on the physics and chemistry aspects of the scale-up and a lot less on the modeling mathematical details. We do have some information available in the way that Brennan organized it for those of you who might be interested.

Okay, here you can see the unit that we’re trying to scale up. It’s been presented by both Mike and Brennan. I’ve got a nice picture of a catalyst here showing the little 0.5-millimeter platinum-titania spheres. That’s the case we spent most of our time on. You see some of the upgrading features, the process conditions that were used. The most important thing on this slide is the cyclic operation of this reactor system and especially the time spent during the regeneration, where we basically had to decarbonize the catalyst oxidatively to return it to the active state. And in a risk assessment that we did in this project quite some time ago, we all zoomed in on the regeneration step as the one that’s most important because that’s the one where we have very exothermic reactions and high heat-transfer challenges. So let’s go to the next slide.

Okay, the scale-up objective for all of this work was the TCPDU, which is the thermochemical process development unit located at NREL. And, again, this is all focusing on the packed-bed reactor, the PBR. In this unit, early on we picked a catalyst loading of 6 kilograms with 9 kilograms per hour of biomass for a space velocity similar to our small lab-scale unit, but with a scale-up factor of 60, okay. There were some constraints that we were able to identify very quickly. I listed some of them here. Some of them are because of the gas flow and the pressure drop limitations in the actual TCPDU hardware. Some of them are because we wanted to apply constraints that would help us go to the next step in scale-up. For example, the wall heat removal, so that we could mimic an industrial-scale reactor.
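
As a quick sanity check on those numbers, reading “space velocity” as biomass fed per mass of catalyst per hour: 9 kg/h over 6 kg of catalyst is a weight hourly space velocity of about 1.5 per hour, which sits in the same 1 to 2 per hour range as the bench unit’s 100 to 200 grams per hour of biomass over roughly 100 grams of catalyst, while the catalyst inventory itself grows from about 0.1 kilograms to 6 kilograms, which is the factor of 60.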

Picture on the right shows you a mock-up of a catalytic reactor that we built at the National Energy Technology Laboratory and a couple of extremely qualified guys helping me construct that. We did a number of loading tests. We did a number of pressure drop tests. This is the kind of thing that we used in combination with the modeling to (A) make sure that the physical issues surrounding the reactor in the TCPDU would fit and make sense and (B) that we could get the kind of data that we needed to support the modeling. So next slide, please.

Going a little bit further into that TCPDU system, we possess three existing reactors. And each of these reactors had three internal heating rods. We called them “bayonet heaters,” which can be converted to cooling tubes. And so initially, I mean, from early on, we looked at the overall pressure drop. We looked at some kind of back-of-the-envelope heat-transfer calculations. And we said, look, it’s really better if since we’re not going to operate the unit in full-swing cycle, that we split that 6-kilogram bed between three existing reactors of 2 kilograms each.

So we brought the scale-up factor down to about 20. Now, there were some strict gas limitations based upon like—for example, total nitrogen flow in the lab is about 400 per bed and total compressed airflow limit works out to about 200 SLPM per tube. So, again, this is all drawing the module we have to operate.

Now here, I show you some details of the model. On the left you can see the TCPDU bed with 18 million catalyst pellets. There’s a little 2FBR bed with 900,000, so there you see your scale-up factor of 20. In the middle, I’m showing you a couple of grid resolutions of the actual model using COMSOL to simulate these beds. We don’t need to simulate the entire bed because it has threefold symmetry, so that saves some computational time. The important point here is that we looked at a range of griddings and what I call “particle resolution,” which is how many real particles does each virtual particle have to represent. We covered a range from 120 at the high side to 40 at the low side of the TCPDU. And for the 2FBR for the small scale, we could even get down around 8 or 10. The take-home message there is all of our discretization studies showed us that we had very grid-independent reliable solutions, so this was good.

Last diagram here shows you an illustration—I’m sorry, if you could go back real quick? Yeah, sorry—of how the COMSOL reactive pellet bed handles the properties’ reactions inside the pellet—nowhere near as complicated as what Brennan showed you. But we deemed that for the purpose of a reactor-scale model, where we wanted some provision for intraparticle, intrapellet conditions to be simulated, that this was a good approximation. Next slide, please.

Okay, how did we get the data necessary to be able to tune this model so that we could use it for the TCPDU? I show you a list of parameters here in a couple of different boxes. The most important ones in red are where we really didn’t have a guess or even a very good guess, I should say. Some of these that are not in red are pretty good guesses based upon my experience. But, in particular, the frequency factor for the combustion reaction and the heat-transfer coefficients turned out to the most critical.

The top slide shows you how we were able to tune in the frequency factor for the reaction by looking at that region where the CO2 profile is declining. That’s where the kinetic information lives and that’s where we were able to tune to our frequency factor. If you look at the outlet temperature plot, you see that there are actually four different combinations of thermal factors that actually meet—that actually duplicate the outlet temperature profile very well. So I’m going to come back to what that means on the next slide.

The final slide over to the right you heard Brennan mention that the coke profile predicted by the model developed by his team was very useful in enabling, really, this type of modeling work, and that’s absolutely true. COVID has prevented us from being able to analyze this coke profile experimentally. So with the information provided by Brennan’s model, we were able to keep this ball rolling and take this all the way to model completion and prediction. Next slide, please.

Here’s what the takeaway from the 2FBR modeling data extraction process, and that is what combination of thermal parameters are really—best describe that particular scale of reactor. And remember I told you there were four parameters that gave equal such measurements. I’ve basically drawn a locus here that says probably somewhere anywhere along this line you would have a similar fit. But you look at what this means. All the way over to the left, you’re looking at very low values of solids thermal conductivity and relatively high values of bed heat-transfer coefficient. On the opposite side on the right—on the right it’s the opposite side. Bulk oxide values of thermal conductivity but low bed heat-transfer coefficients.

Now for our sweeping studies, we decided to look at all four of these combinations. But early on, we kind of put our money on the leftmost point that you see in this graph based upon two things: number one, the catalysts that had been studied in the literature in detail do tend to show very low effective solids thermal conductivities. And, number two, most of the published bed heat-transfer correlations for particle sizes in this range are well above 100 W/m·K. So that’s just to let you know that’s where we kind of put the focus for the rest of the study. Next slide, please. 

Here are some images, three-dimensional images of temperatures, temperature radiance, and heat-up and cooldown rates for the two cases with the maximum process air, no cooling air versus minimal process air, and maximum cooling air. And these are the kind of things that we looked at in carrying out this study. If you give me one click there, Mike, you can see, for example, sharp gradients in the region where we’re using the cooling tubes to do most of the heat extraction. Not surprisingly, we would have very sharp gradients there. These are showing up to 300 degrees, say, per centimeter. It can actually be much higher, as I’ll show you in the next slide. And then it’s one more click, Mike. Also a rapid cool down region. We saw this a lot in a lot of our simulations depending upon process conditions and those thermal parameters. In this case, the process gas is flowing slowly. So as the combustion front moves, it leaves a very sharp cooldown region in its wake. Next slide, please.

So went a little bit further. We looked at a lot of different—a lot of other different ways of looking at this information from the model. Also, we looked at the intrapellent gradients that I’m showing you on the upper right to make sure we were handling the diffusional properties of that pellet correctly. We produced reduced-order models, looked at a whole lot of things. And I don’t have time to share all of those details, but I do want to point out there is a publication that we just submitted to Reaction Chemistry and Engineering and I cited here on the bottom. And if anybody’s interested, I can certainly send you a draft version of this presentation.

Now we’ll go to the conclusions from all of this modeling work. Okay, conclusion number one is the risk of catalyst damage and/or accelerated irreversible deactivation from the thermal excursion is high in this proposed design. It’s basically—it’s going to be a tough call to make this design work. The real issue is the very small catalyst particle size, which constrains bed depth and process gas flow, both of which constrain heat removal.

We are considering some design improvements. You heard Brennan mention using reduced-order models to map the catalyst bed design space, not just for upgrading but also for regeneration. And we have started evaluating moving bed alternatives to packed-bed reactors that are not like fluid beds that are more like CCRs, continuous catalyst reformers. No time to talk about that plan here today. One important conclusion that I want everybody to understand is although small by industry standards, a scale-up factor of 20 can be substantial, certainly as we demonstrated here. Next slide.

Okay, I want to say just a couple more words about the improvements in the models that we’ve developed here. Number one, we would like to firm up the conclusions from the regeneration model by addressing the two main key unknowns, and that is the thermal conductivity of the catalyst pellets and the coke distribution, not just in the bed but also ideally within the catalyst particles.

The second thing I’m going to say just a line or two about is an idea that we’ve had to expand this kind of model to include stack beds with multiple catalysts. And we want this also to be able to handle combinations of exothermic and endothermic reactions and even, for example, if we’re decoking a bed, a stacked bed of catalysts where we would have different coke levels. So here’s the illustration of the model as it exists today. Totally—it’s in a fictitious system where we’re basically saturating benzene in an exothermic process and then hydrocracking cyclohexane to butane and ethylene in an endothermic process.

And you see the list of parameters that can be adjusted on a bed-by-bed basis. Basically, everything is possible in this COMSOL model in terms of bed-specific parameters. A couple of examples I’ve shown you where what happens if we switch some of the wall heat transfer off in certain beds? What happens if we apply different temperatures, wall temperatures to the beds as if we had different zones in our furnace? So you can do a lot of possibilities with combinations of conditions in this reactor.

And then the last slide about this model is basically to show you that you can also do the activity gradient, which I’m showing you on the top. Unfortunately, those graphs are tilted to the side, but those are big ranges in activities for each bed. And you can then look at the effect on the overall conversion process as well as the temperatures and fluid properties, and you could see where, for example, the intermediate cyclohexane is being generated and then being depleted. And so this model is something that we definitely intend to be building on for future efforts, including catalytic fast pyrolysis.

And that should bring me to our acknowledgements. And as you can see on this first slide – [crosstalk] Yes?

>>Erik: I’ll just kind of step in. I think we just have a few more minutes, so let’s just jump to the questions.

>>Bruce: Okay, okay, good.

>>Erik: If you have questions, we do have a few minutes, so please enter those into the chat at this time—excuse me—into the Q&A panel at this time. It’s in the lower right-hand side of your screen. If we don’t have time to get to your question today, go ahead and enter it anyways and we will try to get an email to you to get a response as soon as we can. But, again, we do have a few minutes for questions if you have them. I’m not seeing anything immediately, but we’ll just hang out here for a few moments.

>>Mike: Erik, this is Mike Griffin. Just to chime in, our email addresses should be showing on your screen now. If there are questions that we can help address, even outside of this webinar setting, we’re available for that. Please don’t hesitate to reach out. We’d love to talk to anybody who’s interested in more about these topics.

>>Erik: Great. Well, I am not seeing any questions at this point, but I did want to thank Mike—oh, we did have one question come into the chat panel. Can the catalysts be regenerated by heat?

>>Bruce: Can the catalyst be regenerated by heat?

>>Erik: Correct.

>>Bruce: Is that—is the question about a purely heating process only, like a thermal desorption, or is it heat in combination with oxygen, which is, of course, what we’re doing now?

>>Erik: Greg, if you want to give it a little more context, I can pass that on to Bruce. And we did have a couple of comments. “Thank you for the interesting lunch and learn. It looks like a lot of progress has been made on the modeling side of things as well as the practical validation. I will enjoy reading those pages as they are published on these topics.” And, Bruce, Greg did follow up. He said, “Run it through the pyrolysis process.”

>>Bruce: Yeah, right, so an anaerobic regeneration. Yes and no. The kind of coke that forms in catalytic fast pyrolysis is not a dense, highly graphitized PNA-type coke like I’m used to working with in, for example, hydrotreating. So there are some pyrolytic possibilities for cracking and desorbing that coke. Unfortunately, what happens, as typically happens in a cracking process, is that you break bonds in the easiest places and can drive off some relatively light material. But then what you’re left with is an even more aromatized or higher-molecular-weight fraction, which may be even more difficult to combust using oxidative regeneration. So the short answer is no, you can’t do it all by just a pyrolytic desorption.

>>Erik: Okay, yeah, thanks for that. We do have a few more questions, but I think we are kind of finished up with our time here. I do have these questions recorded, so if we haven’t answered your question today, we will do our best to follow up with you offline. But again, thank you to Mike, Brennan, and Bruce. Thank you everyone for joining today’s webinar. We hope you have a great rest of your afternoon or morning, wherever you’re at. Thank you.

[End of Audio]