October 31, 2007

Process Engineers - Real Challenges Or Mistakes

Many forums, discussion boards, and blogs will tell you about the role of process engineers in the chemical manufacturing industry, but you will rarely find the real technical challenges a process engineer faces. I recently faced some of them and decided to write about the issue.

The most important of these is the availability of CORRECT DATA.

The role of process engineer includes the following:
  1. Process Scheme Development using technical improvement measures.
  2. Process Improvement using technological developments.
  3. Process Troubleshooting.
  4. Productivity & Efficiency Improvements.
  5. Energy Conservation & Management activities.
  6. Technology Transfer etc.

Therefore, you need to search for a lot of data for your analysis. Of this, chemical properties data, when available, is correct ~98% of the time; for the remaining 2%, any variation is usually due to other reasons, e.g. the product specifications are different, the product is not common, or it is not listed in a standard database.

However, data such as solubilities and distribution coefficients vary widely between sources. For example, I was recently working on a job to recover acetic acid from a liquid effluent. My team member listed a few solvents used for LLE (liquid-liquid extraction) of acetic acid; MTBE was one of them.

When we applied our selection criteria across different parameters and shortlisted 2-3 solvents as the most probable and economical options, MTBE topped the list due to a high reported distribution coefficient of 3.8 (as listed in one of the papers; I won't mention the link) and a very low boiling point of ~55°C.

So we planned around it and invested time and money. But when we carried out our experiment, the measured D was only ~0.95 or so. Initially we were surprised by those results; after repeat tests and analysis, we went back to the literature and found that it is also listed as 0.6.
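The cost of an over-reported D is easy to quantify. The sketch below assumes a single ideal equilibrium stage (real extractors use several stages, so the absolute numbers are illustrative only) and computes the solvent-to-feed ratio each reported D would imply for 90% recovery:

```python
def solvent_to_feed_ratio(D, recovery):
    """Solvent-to-feed ratio S/F for one ideal equilibrium stage.
    Recovered fraction r = D*(S/F) / (1 + D*(S/F)), so S/F = r / (D*(1 - r))."""
    return recovery / (D * (1.0 - recovery))

# Distribution coefficients for MTBE as reported by different sources
for D in (3.8, 0.95, 0.6):
    print(f"D = {D}: S/F = {solvent_to_feed_ratio(D, 0.90):.1f} for 90% recovery")
```

The literature value of 3.8 implies a solvent rate of roughly 2.4 times the feed; the measured 0.95 implies roughly 9.5 times. The economics of the scheme change completely.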

So which one is correct?

Why should we rely on them?

These two critical questions are, I am sure, faced by many process engineers from time to time.

Plan your initial feasibility on the published data, but do some lab work before you propose anything if you have no prior experience with the system concerned.

In the second case, if you plan any improvement in your existing processes, be it capacity, energy saving, or equipment performance, you again need base data. In most cases you have temperatures and pressures available on your DCS or field instruments, but you do not have the critical flow rates, which are sometimes more important than P&T data alone.

For example, when you check pump performance, especially for cooling water pumps, you generally don't have an accurate flow measurement.

In such cases we need to do some fundamental heat and mass balance work across the critical equipment, or use an experience-based trick: I use the shutoff head and the actual operating head, along with a system curve plotted from different Q & H combinations, to estimate the flow.
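As a sketch of that trick, assume the pump curve near the operating region can be approximated as H = H_shutoff - k·Q² and the system curve as H = H_static + c·Q²; their intersection gives the flow. All coefficients below are illustrative placeholders, not field data, and in practice k and c must be fitted from the actual pump curve and site measurements:

```python
import math

def estimate_flow(h_shutoff, k_pump, h_static, c_system):
    """Intersect an approximate pump curve H = h_shutoff - k*Q^2
    with a system curve H = h_static + c*Q^2 and return the flow Q."""
    if h_shutoff <= h_static:
        raise ValueError("pump cannot overcome the static head")
    return math.sqrt((h_shutoff - h_static) / (k_pump + c_system))

# Illustrative placeholder values (heads in m, flow in m3/h)
q = estimate_flow(h_shutoff=50.0, k_pump=2.0e-5, h_static=20.0, c_system=1.5e-5)
print(f"Estimated flow: about {q:.0f} m3/h")
```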

Why doesn't industry push for sufficient instrumentation for efficiency and performance measurement? It helps greatly later on in identifying problems and improving performance and production, and the cost of such installations is only a fraction of what can be saved from the resulting exercises.

Why should a process engineer have to do so much, wasting time and energy?

The problem lies with the process and project engineers themselves. They do not recognize the importance of such instruments at the conceptualization stage of the project, and they omit the fractional cost to keep themselves under the targeted budget.

From now on, every one of us should focus on specifying these instruments in the field from the preliminary stage of the project. It will save a lot of money for your company, and for you as well.


October 25, 2007

CompaBloc - Shell & Tube may become History in future

CompaBloc, a true cross-flow plate heat exchanger from Alfa Laval, may soon replace large conventional shell & tube (S&T) exchangers in the chemical process industry, thanks to its very high U, longer clean service, and very low footprint (~25-35% of an equivalent S&T). This article is based on our actual experience in our plant.

We were using a conventional S&T feed/effluent exchanger in a glycol plant: ~400 m² of area for a 2.0 Gcal/hr heat load, with an estimated U of ~625 kcal/hr/m²/°C. The CompaBloc used for this service has only 131 m² of area for ~2.8 Gcal/hr, with an estimated U of 2850 kcal/hr/m²/°C, more than four times that of the conventional S&T.
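These figures are self-consistent: backing the log-mean temperature difference out of Q = U·A·LMTD gives roughly the same ~8°C driving force for both units, so the area saving tracks the U ratio. A quick check using the numbers above:

```python
def required_area(duty_kcal_h, u_kcal_h_m2_c, lmtd_c):
    """Heat transfer area from Q = U * A * LMTD."""
    return duty_kcal_h / (u_kcal_h_m2_c * lmtd_c)

# Back out the implied LMTD for each unit from the plant figures above
lmtd_st = 2.0e6 / (625 * 400)     # conventional S&T
lmtd_cb = 2.8e6 / (2850 * 131)    # CompaBloc
print(f"Implied LMTD: S&T {lmtd_st:.1f} degC, CompaBloc {lmtd_cb:.1f} degC")
print(f"U ratio (area saving at equal duty and LMTD): {2850 / 625:.1f}x")
```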


Alfa Laval (India) Ltd has introduced a high quality welded plate heat exchanger, Compabloc, that is designed for easy access and long, low-maintenance life, resulting in low lifecycle costs. Compabloc is designed for high thermal performance and compact installation and is available in stainless steel and exotic metal alloys. Although fully welded, it is accessible on both sides by removing side panels for cleaning, maintenance or repair.

The heart of this unit consists of welded corrugated heat transfer plates, pressed in SS or exotic materials. The absence of gaskets allows it to handle high temperature fluids and operate in chemically aggressive environment. It is ideal for any industrial application requiring efficient and economic heat transfer in a fully welded design required normally in hydrocarbon processing industries.

There are thousands and thousands of Compabloc plate heat exchangers working in various processes all over the world. For their users, they are true productivity boosters. The design of the plates inside the Compabloc has been improved, and the result is an even better operating performance and longer life.

Changes in the plate configuration provide more contact points between individual plates, which adds to the structural strength. The use of laser welding enables the Compabloc to withstand higher pressures and temperatures. As a result, the newer units provide up to 7% more heat transfer efficiency than previous Compabloc models.

The best news is, thanks to the modular design of Compabloc, upgrading is a very simple, straightforward process. Replace your present CP block with the new CPL block and you get a new Compabloc at only a fraction of the price.

Performance & Features
Compabloc is available in areas from 1 to 320 m². The largest CP requires only a 1 × 1 m footprint, which saves installation space and is very handy in revamps/debottlenecking of existing units, where space is normally a major constraint for larger new equipment.

They can operate from full vacuum (FV) to 35 bar and from -40°C to 350°C, making them useful for almost every process application. They can be mounted horizontally, vertically, or suspended, and are suitable for pharma, hydrocarbon, and petrochemical services.


October 08, 2007

Parallel Pumping - It's a Team Effort

When multiple pumps operate continuously as part of a parallel pumping system, there can be opportunities for significant energy savings. For example, lead and spare (or lag) pumps are frequently operated together when a single pump could meet process flow rate requirements. This can result from a common misconception—that operating two identical pumps in parallel doubles the flow rate.
Although parallel operation does increase the flow rate, it also causes greater fluid friction losses, results in a higher discharge pressure, reduces the flow rate provided by each pump, and alters the efficiency of each pump. In addition, more energy is required to transfer a given fluid volume.

A split-case centrifugal pump operates close to its BEP while providing a flow rate of 2,000 gallons per minute at a total head of 138 feet. The static head is 100 ft. The pump operates at an efficiency of 90% while pumping fluid with a specific gravity of 1. With a drive motor efficiency of 94%, the pumping plant requires 61.4 kW of input power.

When an identical parallel pump is switched on, the operating point of the composite system shifts to 2,500 gpm at 159 ft of head.

Each pump now operates at 80% efficiency while providing a capacity of 1,250 gpm. Although the fluid flow rate increases by only 25%, the electric power required by the pumping system increases by 62.2%:

P2 pumps = 0.746 kW/hp x (2,500 gpm x 159 ft) / (3,960 x 0.8 x 0.94) = 99.6 kW

For fluid transfer applications, it is helpful to examine the energy required per million gallons of fluid pumped.

When a single pump is operating, the energy intensity (EI) is as follows:

EI1 = 61.4 kW / (2,000 gpm x 60 minutes/hour x 1 million gallons/10^6 gallons) = 512 kWh/million gallons

When both pumps are operating, the EI increases as follows:

EI2 = 99.6 kW / (2,500 gpm x 60 minutes/hour x 1 million gallons/10^6 gallons) = 665 kWh/million gallons

When both pumps are operating in parallel, approximately 30% more energy is required to pump the same volume of fluid. The electrical demand charge (kW draw) increases by more than 62%. If the current practice or baseline energy consumption is the result of operating both pumps in parallel, pumping energy use will decrease by 23% if process requirements allow the plant to use a single pump.
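The arithmetic above can be reproduced directly from the standard US-unit pump power formula; the short sketch below recomputes both the input power and the energy intensity figures:

```python
def pump_power_kw(q_gpm, head_ft, sg, pump_eff, motor_eff):
    """Electrical input power, US units:
    kW = 0.746 * Q[gpm] * H[ft] * SG / (3960 * pump_eff * motor_eff)."""
    return 0.746 * q_gpm * head_ft * sg / (3960.0 * pump_eff * motor_eff)

def energy_intensity(power_kw, q_gpm):
    """kWh required per million gallons pumped."""
    return power_kw / (q_gpm * 60.0 / 1.0e6)

p1 = pump_power_kw(2000, 138, 1.0, 0.90, 0.94)  # single pump near BEP
p2 = pump_power_kw(2500, 159, 1.0, 0.80, 0.94)  # two identical pumps in parallel
print(f"One pump:  {p1:.1f} kW, EI = {energy_intensity(p1, 2000):.0f} kWh/Mgal")
print(f"Two pumps: {p2:.1f} kW, EI = {energy_intensity(p2, 2500):.0f} kWh/Mgal")
```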


The reasons for this can be as below:
  1. Incorrect piping layout for the combined system. Generally the interconnection of the individual discharge headers to the common header is made perpendicular (a tee connection). When the common header is not large enough to accommodate flow pulses or higher volumes, restriction of the discharge flow is very common for parallel pumps, causing ~5-15% loss in system efficiency. Convert the tees to tangential entries to reduce entry losses and give a better ejector effect to the other pumps in the flow direction.

  2. Deviation of pump operation from the BEP, due either to a change in the system curve over time or to alterations made in the system that were not accounted for when identifying the BEP on the system curve versus the pump curve. It may also be due to the addition of another pump in the same system at a farther location, which is often not recognized as a parallel pump due to poor visibility.

  3. After a few months or years of operation, especially in cooling water pumps, internal corrosion and pitting shift the pump curve below the original one, reducing the capacity and head of that particular pump. When this worn pump runs in parallel with a healthier pump, it suffers in two ways: first, the healthier pump's head is higher, so it presses back against the worn pump and reduces its capacity further; second, the worn pump's capacity is already lower due to wear. This has a double impact on overall performance. In some cases the worn pump may contribute barely 10-20% of the flow while consuming equivalent power.

  4. So the perception that two identical pumps which worked fine initially will always keep working at their best is wrong: differential changes, even in similar pumps, lead to variations, and they do not remain identical after some time in operation.

  5. The motors may run at different speeds. This is not common by design, but even with same-speed motors, slightly different slip can alter their speeds by around 1%. This also shifts the curves of "identical" pumps, leading to a drop in efficiency, so check the motor speeds as well.

  6. A VFD changes the pump curve by altering its speed per the affinity laws (head scales with the square of speed). However, the system curve for many process applications, circulating pumps, filling pumps, and especially boiler feed water pumps does not follow the same quadratic shape, being dominated by static head rather than friction. In such cases a VFD may reduce overall system performance, so be careful when suggesting a VFD for a parallel pumping system.
Any one of these, or all of them together, can cause a major loss in EI. The term energy intensity, as defined above, is introduced for better understanding of the entire system. Our target should be to improve EI rather than only the pump efficiency, the flow, or the head.
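Point 5 above is easy to quantify with the affinity laws (Q ∝ N, H ∝ N², P ∝ N³): even a 1% speed difference between "identical" pumps shifts the head by ~2% and the power by ~3%, enough to unbalance flow sharing. A minimal sketch with illustrative numbers:

```python
def affinity_shift(q, h, p, speed_ratio):
    """Centrifugal pump affinity laws: Q ~ N, H ~ N^2, P ~ N^3."""
    return q * speed_ratio, h * speed_ratio**2, p * speed_ratio**3

# Illustrative pump (1000 m3/h, 50 m, 150 kW) running 1% slow due to extra slip
q2, h2, p2 = affinity_shift(q=1000.0, h=50.0, p=150.0, speed_ratio=0.99)
print(f"Q: {q2:.0f} m3/h (~-1%), H: {h2:.2f} m (~-2%), P: {p2:.1f} kW (~-3%)")
```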

More info next time...


October 06, 2007

HydroPower - Is it clean or not?

Opponents of dams have long argued against putting barriers in the natural flow of a river. Dams, they point out, prevent endangered fish from migrating, alter ecosystems, and threaten the livelihoods of local communities.

Native Americans, fishing communities, and environmentalists have made these arguments in their quest to decommission four dams on the Klamath River, which runs from southwest Oregon to the coast of California. But with California requiring a 25 percent reduction in the state's carbon dioxide emissions by 2020, clean energy has suddenly entered the Klamath dam debate.

However, replacing the power from these dams could result in adding combustion emissions to the environment.

Hydro-Québec, the world's biggest producer of hydropower, claims that "compared with other generating options, hydropower emits very little greenhouse gas," thus "contributing significantly to the fight against climate change."

Maybe not. Recent reports on methane emissions suggest that dams are anything but carbon-neutral.

According to recently published estimates from Brazil's National Institute for Space Research, the world's 52,000 largest dams release 104 million metric tons of methane annually. If these calculations are correct, then dams would account for about four percent of the total warming impact of human activities -- and would constitute the largest single source of human-related methane emissions.

If methane released from reservoir surfaces, spillways, and turbines were taken into account, India's greenhouse emissions could be as much as 40 percent higher than its current official estimates. But India, as a developing nation, is not required to cut emissions -- and has yet to measure methane from its 4,500 dams. And that's a problem, because while methane does not last as long in the atmosphere as carbon dioxide, its heat-trapping potential is 25 times stronger.

A Swirling Debate

In 2004, Philip Fearnside of Brazil's National Institute for Research in the Amazon suggested that a massive surge of methane emissions could occur when water is discharged under pressure at hydroelectric dams, in a process known in the industry as "degassing."

The problem with dams is that organic matter gets trapped in them when land is first flooded, and more gets flushed in, or grows there, later on. In tropical zones, such as Brazil, this matter quickly decays to form methane and carbon dioxide.

But just how big a problem this creates is controversial. A debate has been raging for years between researchers connected to Hydro-Québec and Brazil's Eletrobras, two of the world's largest hydropower companies, and several small teams of independent hydrologists.

According to Fearnside, if degassing emissions were factored in at several large hydropower plants in Brazil, then these dams would be larger contributors to global warming than their fossil fuel counterparts. To be precise, Fearnside suggested that during the first decade of its life, each of these dams would emit four times as much carbon as a fossil fuel plant that makes the same amount of electricity.

Fearnside's claims have triggered a firestorm. Luis Pinguelli Rosa, formerly of Eletrobras but now based at the Federal University of Rio de Janeiro, claimed Fearnside had made "scientific errors," including a failure to grasp how degassing works, and so had exaggerated the emission levels.

Rosa pointed out that Fearnside had extrapolated his calculations from data taken at the Petit Saut dam in French Guiana in the years immediately following the creation of the reservoir, when organic matter, and thus methane emissions, would likely be at their highest. Patrick McCully, executive director of the Berkeley, CA-based International Rivers Network, says that one of the areas of strongest disagreement among reservoir emissions researchers is how to quantify net emissions.

In a recent paper, "Fizzy Science," McCully shows that key factors influencing reservoir greenhouse gas emissions include fluctuations in water level, growth and decay of aquatic plants, decomposition of flooded biomass and soils, the amount of methane bubbling from the surface, and the amount of carbon dioxide diffusing in.

