
Latest News
National Guard Members Continue LA Wildfire Response
January 21, 2025
U.S. Army Sgt. Bryce Carter, an infantryman with C Company, 1st Battalion, 160th Infantry Regiment, California Army National Guard, sharpens the blade of a hoe to clear brush and other debris as part of remediation efforts along the Mulholland Trail near Tarzana, California, in the aftermath of the Palisades Fire, Jan. 18, 2025. Carter and other members of his unit were assisting CALFIRE in mop-up efforts, which included clearing brush and backfilling firebreaks and other areas to prevent mudslides and reduce the impact of firefighting efforts.

California Guardsman Helps Battle Wildfires in His Community
January 16, 2025
Master Sgt. Alan Franklin, a commander's support Airman with the 146th Airlift Wing, speaks to 1st Lt. Aiden Flores about the Modular Airborne Fire Fighting System mission on the flightline at Channel Islands Air National Guard Station, Port Hueneme, California, Jan. 13, 2025. MAFFS aircraft from the Air National Guard's 153rd Airlift Wing, Cheyenne, Wyoming, the 152nd Airlift Wing, Reno, Nevada, the 146th Airlift Wing, Port Hueneme, California, and Air Force Reserve Command's 302nd Airlift Wing, Peterson Air Force Base, Colorado, are working together to combat fires in the Los Angeles area.

National Guard Bureau Chief Thanks Firefighting Guardsmen
January 14, 2025
Air Force Gen. Steve Nordhaus, chief, National Guard Bureau, and Army Senior Enlisted Advisor John Raines, SEA to the CNGB, visit National Guard members supporting wildland firefighting in Southern California, Channel Islands Air National Guard Station, Calif., Jan. 11, 2025. Thousands of National Guardsmen are involved in multiple air and ground firefighting operations in the Los Angeles Basin and Southern California.

Wyoming, Nevada Guard Aircrews Assist California Firefighters
January 13, 2025
U.S. Air Force Airmen assigned to the 153rd Airlift Wing load and install the Modular Airborne Fire Fighting System onto a C-130H Hercules aircraft in Cheyenne, Wyoming, Jan. 10, 2025, in preparation to support firefighting efforts in the Los Angeles area.

California, Nevada, Wyoming Guard Join Firefighting Battle
January 10, 2025
U.S. Air Force Airmen with the 129th Rescue Wing, California Air National Guard, at Moffett Air National Guard Base, Calif., prepare an HH-60G Pave Hawk helicopter to help battle the Palisades Fire Jan. 9, 2025.


Latest Photos
2025 Wildfire Response


Latest Videos
Video by Kevin D Schmidt
Dr. Yubei Chen
Air Force Research Laboratory
March 22, 2024 | 01:18:31
In this edition of QuEST, Dr. Yubei Chen discusses his work on Principles of Unsupervised Representation Learning.

Key moments in the video include:
- Introduction to Dr. Chen's lab, mentors, and collaborators
- The current machine learning paradigm
- Natural intelligence learns with intrinsic objectives
- The future machine learning paradigm and unsupervised representation learning
- Defining unsupervised representation learning
- Supervision and similarities: spatial co-occurrence, temporal co-occurrence, Euclidean neighborhoods

Main points:
- Derive an unsupervised representation transform from neural and statistical principles
- Simplification and unification of deep unsupervised learning
- The convergence of the neural and statistical principles
- Neural principle: sparse coding
- Statistical principle: manifold learning
- Manifold learning and locally linear embedding
- Sparse manifold transform
- Encoding of a natural video sequence
- Recap of main points
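The "neural principle" named above is sparse coding: representing a signal as a combination of only a few dictionary elements. As a generic illustration (not code from Dr. Chen's talk, and the dictionary and signal here are synthetic), the sketch below infers a sparse code with ISTA, a standard iterative shrinkage-thresholding algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.05, n_iter=200):
    """Infer a code a minimizing (1/2)*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Synthetic example: a 2-sparse signal in a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(32)
a_true[[3, 10]] = [1.0, -0.5]
x = D @ a_true

a = sparse_code(x, D)
print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
```

The L1 penalty drives most coefficients exactly to zero, so the recovered code concentrates on the few dictionary atoms that actually generated the signal.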

Audience questions:
- On the three sources of similarity, do you think there is a way to map semantic similarities from crowdsourced resources like ConceptNet?
- Are there equivalencies here with cryo-EM analyses?
- One of the things that made deep learning what it is was AlexNet's performance on ImageNet, right? Same thing with transformers and language translation. So how are you going to demonstrate that this impressive body of work is better than whatever state of the art is out there? How are you going to demonstrate that it's useful?
- Follow-up: Is there a benchmark or standard data set, which you might produce, that establishes something about representation learning?
- Co-occurrence is great for a lot of things, but it's a poor choice for comparison when you have different dimensions of evaluation you might want to do. Are you thinking about extending your ideas beyond things that are co-occurring or similar along one dimension, to things further away?
- Is there any sort of procedure for pruning vestigial actions that are no longer necessary for the interpolated tasks that won't just propagate down for future interpolations?