Emergency Response: Lessons Learned during Recovery from a Fire in the Detroit Powerhouse

The response to a fire in the 100-MW Detroit powerhouse brought to light several inadequacies in data available and procedures followed during emergencies. The lessons learned during the recovery process are valuable for every hydro project owner.

By David M. Bardy, Thomas Voldbaek, Harley Grosvenor and David C. Shank

As a result of an electrical fire at the 100-MW Detroit hydro project, personnel with the Portland District of the U.S. Army Corps of Engineers learned many valuable lessons with regard to determining the root cause of the incident and formulating proper emergency response actions.

The fire, on June 18, 2007, was sustained by a 20-MW feed for about 2.5 minutes, resulting in the destruction of two 13.8-kV breakers, current limiting reactors and associated bus in the lower gallery of the powerhouse. (A sustained feed of this magnitude is equivalent to lifting a 3,000-pound car 140 miles in 2.5 minutes or driving an average-sized car more than 3,300 miles.) Heavy smoke and charged soot particles filled the powerhouse and led to a $25 million cleanup and modernization effort.
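
The scale of that energy release can be sanity-checked with simple physics. The sketch below (Python, using assumed unit conversions) converts the 20-MW, 2.5-minute feed into the height a 3,000-pound car could be lifted:

```python
# Rough check of the energy analogy for the 20-MW, 2.5-minute fault feed.
# Assumed figures (illustrative): 3,000-lb car, g = 9.81 m/s^2.
fault_power_w = 20e6            # 20 MW sustained feed
fault_duration_s = 2.5 * 60     # 2.5 minutes
energy_j = fault_power_w * fault_duration_s   # 3.0e9 J of fault energy

car_mass_kg = 3000 * 0.4536     # 3,000 lb in kilograms
g = 9.81                        # gravitational acceleration, m/s^2
lift_height_m = energy_j / (car_mass_kg * g)  # h = E / (m * g)
lift_miles = lift_height_m / 1609.34
print(f"Equivalent lift: {lift_miles:.0f} miles")  # about 140 miles
```

The 3 gigajoules released in those 2.5 minutes matches the article's 140-mile figure almost exactly.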

Background on the incident

The Portland District operates 13 dams in the Willamette River basin in Oregon. Each dam contributes to a water resource management system that provides flood risk management, power generation, water quality improvement, irrigation, fish and wildlife habitat and recreation on the river and many of its tributaries.

The Detroit-Big Cliff Project consists of the Detroit Dam and powerhouse and the Big Cliff Dam and 20-MW powerhouse, both on the North Santiam River. Detroit is about 40 miles southeast of Salem, Ore. Big Cliff is a remotely operated re-regulating project about 2.8 miles downstream of the Detroit project that controls river levels created by peak power generation releases from Detroit. These facilities began operating in 1953.

Electricity generated by both powerhouses is transferred to the Bonneville Power Administration at a BPA switchyard (Detroit Switchyard) at 230 kV. Big Cliff is interconnected to the Detroit powerhouse 13.8-kV station service system through a 13.8-kV transmission line. Before the fire, the switchyard was supplied with 13.8-kV power from the powerhouse, as well as by an auxiliary aerial 13.8-kV source from the Detroit-Big Cliff interconnection. The Detroit project supplies station service power to the BPA substation. In 2004, new 13.8-kV station service switchgear breakers, surge arresters, capacitors and associated high-voltage cables were installed.

At the time of the incident at Detroit, a contractor was rewinding the Unit 1 generator onsite. With this unit out of service for the rewind, a comprehensive tagout was in place. Detroit’s Unit 2 and the single unit at Big Cliff were in service. A control room operator was on duty.

The station service system interconnects Detroit’s two 50-MVA main units and the Big Cliff 20-MVA unit with internal plant loads and step-up transformers serving the BPA switchyard. This is a three-wire ungrounded delta system, using buswork and metal clad switchgear. Beginning in April 2007, plant personnel observed a series of intermittent ground fault alarms, but none of them activated the generating unit ground fault relaying. Numerous efforts were made to track the problems, without success.

Just before the fire, an annunciation and protective circuit operation for a ground fault occurred on the 13.8-kV system. Circuit breakers XJ2, XJ3, XJ31 and XJ5 tripped, de-energizing most of the 13.8-kV bus and tripping Unit 2 at Detroit offline. The problem was later identified as a broken 15-kV insulator on the BPA substation auxiliary aerial power feeder on the Detroit-Big Cliff line.

About 90 minutes after this ground fault, the control room operator at Detroit re-energized the transmission system in an attempt to verify the continued presence of the ground fault. During this action, breaker XJ5 was closed. The surge arresters installed in 2004 failed catastrophically, leading to a switchgear fire that took the entire Detroit plant out of service. Because none of the generating units were operating at the time of the fire, the 13.8-kV bus was ungrounded. The ground fault was still present, shifting the system neutral and impressing full line-to-line voltage on the unfaulted phases with respect to ground. A board of investigation later determined the surge arresters and cables in the station service switchgear were rated only for line-to-neutral voltage (about 8 kV). When subjected to the full 13.8 kV, they rapidly and catastrophically failed.
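
The voltage stress the arresters saw follows directly from the geometry of an ungrounded system. A minimal sketch, assuming a 13.8-kV line-to-line system:

```python
import math

# On a 13.8-kV ungrounded delta system, a single line-to-ground fault
# shifts the system neutral: the faulted phase falls to roughly 0 V to
# ground, while the unfaulted phases rise from line-to-neutral voltage
# to full line-to-line voltage with respect to ground.
v_line_to_line = 13.8                               # kV
v_line_to_neutral = v_line_to_line / math.sqrt(3)   # normal phase-to-ground
v_unfaulted_to_ground = v_line_to_line              # stress during the fault

print(f"Normal phase-to-ground voltage: {v_line_to_neutral:.2f} kV")
print(f"Stress during ground fault: {v_unfaulted_to_ground:.1f} kV "
      f"({v_unfaulted_to_ground / v_line_to_neutral:.2f}x the arrester rating)")
```

Equipment rated only for the nominal 8-kV phase-to-ground voltage was thus exposed to a sustained 73 percent overvoltage, which is why the underrated arresters and cables failed so quickly.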

Circuit breakers XJ5 and XJ9, in adjacent cubicles, are two of the breakers tripped by a broken 15-kV insulator on the Detroit-Big Cliff line. When the control room operator later re-energized the line, surge arresters catastrophically failed, starting a fire.

While not anticipating faulty equipment, designers had accounted for a potential electrical fault in this area. Four levels of protection had been provided: main unit ground, Big Cliff unit ground and 13.8-kV bus ground detection and main transformer differential protection.

Failure of the surge arresters during normal operation would have resulted in a line-to-ground fault, causing relay operation of all three ground detection systems. Two sets of arresters were installed in the fault area. Failure of either would have resulted in an operation of the transformer differential relay: the unit ground and transformer differential relays would have cleared the fault automatically by opening the XJ circuit breakers. The differential also would have opened the circuit breaker in the BPA switchyard.

The protective systems operated as intended during the initial fault. Two parts of the ground detection system were not available when the power was restored because the units were not placed back on line. The ground detection for the 13.8-kV bus operated and annunciated. When the second fault occurred, the ground annunciation remained active, but the operator did not take corrective action by manually tripping breakers. The fourth level of protection had been turned off to perform the Unit 1 generator rewind.

The fault escalated from the surge arrester compartment to the circuit breaker XJ5 cubicle, then to the adjacent XJ9 cubicle and eventually through a short section of bus to the current limiting reactor. At that point, BPA switchyard relaying detected the fault and cleared the line.

Immediate response: June 18

The fire created a heavy smoke that filled the multi-level powerhouse, as well as the control room. Following policy, the plant operator and other crew members evacuated the powerhouse, accounted for all personnel, called 911 and notified management.

The fire department arrived and determined there was no active fire. Because all personnel were accounted for, there was no need for a rescue operation and no need to enter the powerhouse.

The control operator maintained required downstream flows using spillway gates at Detroit and Big Cliff that are powered by emergency diesel generators, which started automatically when the station service power was lost. Some fluctuation in river flows was measured but caused no significant impact downstream.

Moving an aerosol can in the Detroit powerhouse reveals the soot containing charged metallic particles that adhered to almost every surface after the fire. Workers had to scrub or replace everything to remove the material.

Day 1: June 19

A team of engineers, safety specialists, first responders and support staff gathered in the morning. The Corps activated the Incident Command Structure, a standardized, on-scene, all-hazards incident management approach, which helped ensure an orderly response to the fire.

The incident commander identified team leaders to handle various aspects of the response: operations, logistics, public information, safety, water management (flow/fisheries) and technical. The ICS team established a standard operating procedure for meetings before work activities and at the close of each day.

Gaining access to the powerhouse was a top priority, but the plant first had to be deemed safe because of the unknown nature of the smoke and soot. Maintaining water flow and monitoring fisheries concerns, such as dissolved gas levels, were also important to ensure the Corps' environmental stewardship mission continued while addressing the results of the fire.

After discussions with the Marion County fire chief, the Corps agreed a fire crew would enter the powerhouse to assess its condition. A safety and orientation briefing was held, then fire investigators entered the building using self-contained breathing apparatus. They measured oxygen and carbon monoxide levels and identified the general location of the fire. Oxygen and carbon monoxide concentrations on the main generator floor and in the upper level offices were deemed adequate for entry with air-purifying respirators. But concentrations of carbon monoxide in the lower levels of the powerhouse were above acceptable limits and required SCBA for entry. Crew members using this apparatus entered the powerhouse to retrieve drawings, radios and keys to critical equipment, as well as to view control room gauges to assess their status.
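
Entry-decision logic of this kind can be sketched as a simple screening function. The thresholds below are common industrial-hygiene reference points assumed for illustration, not the actual criteria used at Detroit:

```python
# Illustrative post-fire atmosphere screening for powerhouse entry.
# Assumed thresholds (typical industrial-hygiene reference points):
#   oxygen: 19.5-23.5% acceptable for entry
#   carbon monoxide: <= 25 ppm for air-purifying respirator (APR) entry;
#   above that, self-contained breathing apparatus (SCBA) is required.
def entry_guidance(o2_percent: float, co_ppm: float) -> str:
    if o2_percent < 19.5 or o2_percent > 23.5:
        return "no entry: oxygen out of range"
    if co_ppm > 25:
        return "SCBA required"
    return "entry permitted with air-purifying respirator"

# Hypothetical readings mirroring the two conditions described above:
print(entry_guidance(20.9, 10))    # main generator floor and upper offices
print(entry_guidance(20.5, 120))   # lower galleries before ventilation
```

A checklist like this, agreed on in advance with safety personnel, removes guesswork when readings arrive during an emergency.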

The team discussed options for ventilating the lower levels of the plant. Exterior doors and roof hatches were opened. Because Unit 1 was unwatered for the rewind, it was possible to ventilate the lower levels using its penstock. A fan was placed at the top of the penstock to draw air through the open draft tube and scroll case access doors.

The Portland District’s contracting support group initiated several actions, including mobilizing a first-responder contractor to begin the preliminary phase of plant stabilization. The contractor needed the expertise, equipment and trained personnel to enter the plant, assess hazards, monitor air quality and collect soot samples. Northwest Firefighters was awarded this contract. Contracts also were awarded to provide portable toilets, wash facilities, potable water, lights, an office trailer and emergency engine generators.

The district’s safety personnel worked with Northwest Firefighters to re-assess the air in the powerhouse and, after several hours of ventilation, determined that carbon monoxide levels had decreased enough to allow trained responders to enter the lower levels of the plant. Project safety personnel and the contractor worked together to determine the appropriate type and level of personal protective equipment for the workers. At that time, PPE consisted of full-face respirators, Tyvek suits and proper procedures for decontamination.

The composition of the soot and suspended air particles was unknown. It was assumed that mercury and other heavy metals were present, coming from equipment inside the cubicles that had burned.

Investigators found that the powerhouse sump was filling with water. Fortunately, a submersible pump had been installed several years earlier. A portable 60-kW diesel generator was brought from another powerhouse to power the pump. Even with the pump running, the sump level continued to rise, coming within 1 foot of overflowing into the plant and flooding the lower galleries. Crews found that problems in the discharge piping were preventing the pump from discharging, and the situation was corrected.

It was also discovered that Unit 2 was spinning close to synchronous speed due to a loss of governor pressure and relaxing of the wicket gates. The crew closed the headgate to stop the flow of water, but after several hours the unit had not stopped spinning. This was probably due to leaky servo piston rings that allowed oil to port back into the governor sump, which in turn caused the actuator tank to lose pressure and the governor sump to overflow.

During an initial walkthrough of the Detroit powerhouse after the fire, workers in Tyvek suits and air-filtered masks check equipment gauges. These readings were used to set up a temporary control room outside the damaged powerhouse.

Day 2: June 20

The district commander and ICS team members arrived at the Detroit plant for a briefing and to establish a Board of Investigation to document conditions and determine the root cause(s) of the incident. Sabotage had not been ruled out, and it, along with other possible causes, needed to be explored.

The limited diesel generator capacity challenged efforts to manage available power to run lights, the drainage sump pump and the motor-generator set for station battery charging. Attempts to provide power through the station service board failed. It was clear that power would continue to be provided by extension cords to various pieces of equipment, independent of the station bus.

Work focused again on Unit 2. After temporary power was run from the diesel generator to the governor pumps, the actuator tank was isolated to force the gates closed. This stopped the unit, and the manual servo locks were engaged. Personnel discovered that the lower limit switch for the headgate was not set correctly, leaving the gate off the sill by about 1 foot. That was corrected on June 20, but the unit's continued rotation up to that point caused the bearings to overheat.

As work continued, safety and health personnel updated safety plans to include decontamination procedures, proper site entry procedures, work/rest cycles and appropriate PPE. They sent samples of the soot covering all surfaces in the powerhouse for testing to determine its contents, to ensure staff were using the correct level of PPE.

While the plant was being secured, other team members were discussing and reviewing water management issues. The North Santiam River provides habitat for a variety of fish and wildlife, and concerns about downstream water temperature and total dissolved gas levels resulting from water flow changes needed to be addressed. In coordination with tribal, state and federal agencies, changes were implemented to mitigate these concerns and to monitor and offset impacts. For example, different outlets in the dam were used to moderate water temperatures and dissolved gas levels for fish, and personnel were deployed to monitor downstream habitat.

Additional teams focused on maintaining the rest of the facility, including the diesel generator and spillway gates, and operating Big Cliff Dam. Logistics plans for staffing, fuel deliveries, downstream monitoring and setting up a temporary control room were implemented.

By 5 p.m. on June 20, the powerhouse was secured and efforts focused on cleanup, improving working conditions (including lights and decontamination) and providing temporary power to critical systems. Teams began addressing longer-term fixes as well, which are discussed below.

Recovery phase

During the next several weeks, work transitioned from emergency response to assessment of equipment condition and planning for restoration of the plant. Several items needed to be addressed to return the plant to working order:

It became clear that the work needed to restore the powerhouse exceeded the capability of the original emergency response contractor. Within five weeks, a second contract was awarded to Shaw Environmental to finish the plant decontamination and cleanup. Decontamination efforts started in July and lasted until September 11. Contract employees hand-wiped and/or hand-washed all interior building surfaces, equipment, control boards and many system components. Rather than undertake a significant decontamination effort on everything, the contractor removed and disposed of HVAC ducts and equipment, contaminated office furniture and sound-deadening walls because replacing these items was less expensive or safer than cleaning them. Special cleaning solvents were used on electrical equipment to prevent tracking that could lead to faults.

Designs and plans also were developed to restore portions of the station service bus and build temporary buswork that could tie into the feed from the Big Cliff powerhouse. This required fabricating bus and switchgear, as well as protective relays, that would provide a dual source for station power. Power could then be fed either from Big Cliff or the BPA line through the main transformers. Power to the Detroit plant was restored on September 18.

The generator for Unit 1 at Detroit had to be completely disassembled to remove metallic soot from the windings, bearing surfaces and other generator systems. National Electric Coil, the rewind contractor, used an asbestos and lead abatement contractor to perform this task.

During the recovery period, resource management personnel with the Corps were working to obtain funding for the work. Direct funding of about $2 million was provided by BPA to continue the recovery work. Several existing sub-agreements were amended to cover additional work that totaled about $13 million. In addition, a fire restoration agreement for about $10 million was funded, made possible with a mix of both capital and expense funding.

Once the station service bus was restored, work could begin to restore or upgrade the rest of the plant. This included modernizing the 13.8-kV switchgear, upgrading the station service bus and station battery, continuing the Unit 1 generator rewind and accelerating a contract to improve the HVAC and fire protection systems (also in progress when the fire started).

The plant fire protection and HVAC system contract, awarded to Metal Benders Inc., addressed several key items identified during the fire, including: smoke barriers, control room pressurization, automatic fire doors, fire and smoke alarms, fire panels, powerhouse segmentation to limit the ability of smoke to move through the plant, smoke detectors and other systems to protect personnel. As a result, Detroit’s system is one of the most comprehensive approaches to fire protection that has been installed in any Corps plant. It will serve as a model for other Corps facilities.

Recovery work continues, although the majority of the work was completed in 2010. The Big Cliff unit was restored in September 2007 after the temporary bus was installed at Detroit. Units 1 and 2 at Detroit were returned to service in April 2008 and March 2009, respectively, after completing rewinds of both units. The modernization efforts to the station bus, switchgear and fire protection systems have made the Detroit and Big Cliff powerhouses two of the Corps’ most modern plants.

Lessons learned

As a result of the fire at the Detroit powerhouse and the recovery efforts that followed, the Corps learned many lessons other project owners can apply.

Root cause

Update drawings. In many cases, drawings did not reflect actual field conditions. This led operations crews to inadvertently turn off protective systems when clearing out other equipment.

Improve checkout and commissioning procedures. During previous work, equipment had been installed that did not meet specifications or standards. When subjected to the initial ground fault conditions, this equipment failed, initiating the fire. The Corps has strengthened its commissioning and testing requirements.

Improve troubleshooting procedures. Personnel did not fully understand the inadvisability of energizing the medium-voltage system after a ground fault protective relay action. Procedures to test before energization need to be clearly understood.

Provide training and operations and maintenance manuals for new equipment. Plant operators and crew did not have adequate training in operation of the new equipment. Old manuals were not updated with regard to the new equipment and thus were of little use.

Re-evaluate the protection scheme. Although the original design provided adequate protection, enhancements from newer technologies were not implemented. Reviewing all powerhouse protective schemes on a regular basis is important.

Response actions

Ensure access to drawings and keys to critical equipment. It was important to have access to drawings for critical equipment and also have special tools and keys to operate this equipment. Some of these tools and keys were in the control room and unavailable when the plant was evacuated.

Hire knowledgeable, experienced safety personnel. The safety personnel allowed access to the plant and kept workers safe. Without their knowledge, efforts to enter the plant would have been risky and could have led to long-term liability. Working conditions were difficult. Fatigue and overheating were significant issues. Plans developed by the safety personnel addressed this.

Keep portable generators and power cords on hand. Having access to portable power sources and the means to distribute the power was a key element.

Establish an incident command structure. This structure allowed the Corps to maintain order during a chaotic event. It helped to bring in the correct resources and keep them focused on specific areas, with the knowledge that other areas were being appropriately addressed. It allowed a single person to oversee a large, complex operation.

Communication is vital. Having good communication during the initial response was important. The use of cell phones was particularly important, as there was no access to land lines.

Provide access to basic facilities/human needs. Having a group of people focused on providing site logistics and taking care of basic needs was important. It allowed workers and responders to focus on their responsibilities.

Determine environmental consequences. It was important to not only focus on the powerhouse but also on the effects to water flow. Having a team focus on downstream water quality, dissolved gas and temperatures helped to mitigate for the effects of the loss of generation at both plants.

Develop evacuation procedures. Having proper evacuation planning and drills is critical to employee safety. It was fortunate that the fire occurred at night, when the majority of staff was off-duty and contractor forces were not on site. Had this event occurred when personnel were in the lower galleries, injuries and possible fatalities would have been much more likely.

Test the governor. When the governor loses power for extended periods of time, it is important to test the ability of the hydraulic system to keep wicket gates closed. While this event rarely occurs, it can lead to relaxing of the gates and potential loss of control of the unit.

Perform periodic maintenance. Verification of headgate full closure should be part of periodic maintenance before an emergency operation. Never assume it works, unless it has been verified.

Include emergency training. Training and coordination of emergency plans with staff and contractors is vital and should be included during emergency evacuation drills.

Train on emergency equipment use. Emergency pumps and piping installations need to be documented and all staff trained on proper operations.


David Bardy, P.E., is chief of the technical and contracts section for the Willamette Valley Project, Portland District, U.S. Army Corps of Engineers. Thomas Voldbaek is the maintenance manager for the Willamette Valley Project. Harley Grosvenor, P.E., is a project manager with the Portland District of the Corps. David Shank, P.E., is a senior electrical engineer for the U.S. Army Corps of Engineers’ Hydroelectric Design Center, which provides engineering support for the Corps hydro system.
