Tutorial 2: ERP Analysis#
Corresponding file for this tutorial: TMSiPlugins/PsychoPy/example_erp_analysis.py
This tutorial covers the second phase of an ERP experiment, i.e., data analysis, and is a step-by-step guide to running example_erp_analysis.py, located inside the TMSiPlugins/PsychoPy folder.
This example script is provided to import and analyze EEG data recorded (in .poly5 format) during an auditory oddball ERP experiment. To try out the script, sample_data_erp_experiment.poly5 has to be downloaded (available on TMSi's Python examples page).
Before reading this tutorial, it's highly recommended to have a basic knowledge of electroencephalography (EEG), the TMSi Python interface, and TMSi amplifiers. More information about Event-Related Potentials (ERPs) can be found on TMSi's website. To read more about the data collection and stimulus presentation phase, please refer to Tutorial ERP experiment. Before getting started, please ensure that you have Python 3.8 installed on your PC.
Adding the correct directory of files#
To use the TMSi Python interface folder, which is necessary for running the examples, your Python interpreter needs to know the corresponding path. Therefore, the first lines of code add this information. If your recordings are stored in a different folder than the default one, you can change the directory specified in line 39.
35import sys
36from os.path import join, dirname, realpath
37Plugin_dir = dirname(realpath(__file__)) # directory of this file
38modules_dir = join(Plugin_dir, '..', '..') # directory with all modules
39measurements_dir = join(Plugin_dir, '../../measurements') # directory with all measurements
40sys.path.append(modules_dir)
Importing the required classes and functions#
The next lines import all the libraries which are required for the example script to work. They are used for purposes such as performing an independent component analysis (ICA), plotting data, and creating a simple graphical user interface.
41import mne
42from IPython import get_ipython
43import matplotlib.pyplot as plt
44import numpy as np
45from TMSiFileFormats.file_readers import Poly5Reader
46import tkinter as tk
47from tkinter import filedialog
48from mne.preprocessing import ICA
49import easygui
50from mne.preprocessing import peak_finder
51import pandas as pd
Making plots interactive#
These two lines of code are only relevant for Spyder users; they switch Matplotlib to an interactive Qt backend so that plots open in separate windows. If a different integrated development environment (IDE) is used to run the script, these lines should be commented out (an alternative is sketched after the code below).
53ipython = get_ipython()
54ipython.magic("matplotlib qt")
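If you are not using Spyder, a minimal alternative (an assumption, not part of the example script) is to comment out the two lines above and explicitly select an interactive Matplotlib backend before any figures are created, for example:

import matplotlib
matplotlib.use('Qt5Agg')  # interactive Qt backend; requires a Qt binding such as PyQt5
import matplotlib.pyplot as plt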
Setting the device type#
Due to differences in the number of channels and trigger options, the device that was used to run the ERP experiment must be specified; 'APEX' and 'SAGA' are the two possible options (line 57). Furthermore, the trigger values that were assigned to the target and non-target stimuli during the ERP experiment need to be set by the user. For APEX, these values are stored in the STATUS channel.
56#%% Set variables by user
57device_type = 'APEX'
58
59# Event values
60target_value = 16 # Stored value in STATUS channel for target stimulus
61nontarget_value = 2 # Stored value in STATUS channel for nontarget stimulus
Defining a flag variable#
Users can choose between a full analysis including additional plots and a short version in which only the essential plots are displayed. This is done via a flag variable set from user input: the following lines display a dialog box in which the user chooses the desired version, and based on that choice the flag variable is set to True or False.
63full_version = easygui.buttonbox("Would you like to run the full version?", choices=["Yes","No"])
64if full_version == "Yes":
65 a_flag=True
66elif full_version == "No":
67 a_flag=False
68else:
69 a_flag=False
Importing an ERP measurement#
Although this example script is written to import .Poly5 files, it's possible to modify the following block of code if the ERP experiment was stored in .xdf or .edf format (for more information, please check example_file_reader.py located inside the examples_reading_data folder).
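As an illustration, a minimal sketch of loading an .edf export instead (the file name is hypothetical; once an MNE object is obtained, the rest of the analysis stays the same):

import mne
# Hypothetical .edf export of the same experiment (not shipped with the examples)
raw = mne.io.read_raw_edf('erp_experiment.edf', preload=True)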
Next, the imported data is converted to an MNE object (line 81), because the MNE library provides many useful signal analysis and visualization options that are used in the next stages of this example script. Lines 83 to 86 inform the user in case no file, or a file in an unsupported format, was selected.
77 if filename.lower().endswith('poly5'):
78 data = Poly5Reader(filename)
79
80 # Conversion to MNE raw array
81 mne_object = data.read_data_MNE()
82
83 elif not filename:
84 tk.messagebox.showerror(title='No file selected', message = 'No data file selected.')
85 else:
86 tk.messagebox.showerror(title='Could not open file', message = 'File format not supported. Could not open file.')
87
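The filename checked in line 77 is selected via a file dialog earlier in the script (not included in the excerpt above; note the tkinter imports in lines 46 and 47). A minimal sketch of how such a selection step can be set up, assuming the measurements_dir defined at the top of the script:

root = tk.Tk()
root.withdraw()  # hide the empty tkinter main window
filename = filedialog.askopenfilename(
    initialdir=measurements_dir,
    title='Select a .poly5 ERP recording',
    filetypes=(('poly5 files', '*.poly5'), ('all files', '*.*')))
root.destroy()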
Accessing the EEG data from the MNE object#
The four variables assigned in lines 94 to 97 are essential for analyzing ERPs. Please note that two channels (CP6 and POz) are marked as 'bads' (lines 89 to 91) because they were disconnected when 'sample_data_erp_experiment.poly5' was recorded. Following the same logic, it's possible to exclude disconnected channels when importing any other data file.
The next step is to plot the raw EEG data and power spectral density (PSD) values (lines 105 to 110). These plots provide a first insight into the quality of the measurement and the amount of mains interference. Please note that these plots only appear if the user selected the full analysis version. Before plotting, a for-loop is used to exclude the channels that do not contain any signal (lines 100 to 103).
88 if mne_object:
89 if filename[-32:] == "sample_data_erp_experiment.poly5":
90 # Channels CP6 and POz were not connected in this file.
91 mne_object.info['bads'].extend(['CP6', 'POz'])
92
93 # Retrieve the MNE RawArray info, channel names, sample data, and sampling frequency [Hz]
94 info_mne = mne_object.info
95 ch_names = mne_object.info['ch_names']
96 samples_mne = mne_object._data
97 fs = data.sample_rate
98
99 # Do not show disconnected channels
100 show_chs = []
101 for idx, ch in enumerate(mne_object._data):
102 if ch.any():
103 show_chs = np.hstack((show_chs,mne_object.info['ch_names'][idx]))
104 data_object = mne_object.copy().pick(show_chs)
105 if a_flag: # To display raw EEG time series and PSD plot of raw EEG signals.
106 fig_psd, ax_psd = plt.subplots()
107 raw_data_psd = data_object.compute_psd(method='welch', fmin=0.1, fmax=50)
108 raw_data_psd.plot(average=True, picks="eeg", exclude="bads", show=False, axes=ax_psd)
109 ax_psd.set_title("Average PSD plot of raw EEG data of all non-zero channels.")
110 data_object.plot(scalings=dict(eeg = 250e-6), start= 0, duration = 5, n_channels = 5, title = filename, block = True)
Defining channel types#
Depending on which TMSi device was used and how many channels were connected during the ERP experiment, the channel types need to be specified manually, because the .Poly5 file does not contain this information. The code in lines 114 to 153 ensures the correct channel types are set for the mne_object; the comments are self-explanatory.
114# Poly5 dataformat cannot store the channeltypes. Therefore, the researcher should manually
115# enter the types of data each channel holds. Channels are always ordered in the same way:
116 # First channel is common reference, if enabled ('misc' channel)
117 # all UNI channels that are enabled ('eeg' channels by default)
118 # BIP channels (channel type depends on research)
119 # TRIGGER channel ('misc' channel)
120 # Status & counter channel ('misc' channels)
121
122# Define channels in use (1 = enabled, 0 = disabled)
123CREF_used = 0 # If measured with a common reference, set to 1, otherwise 0
124BIP1_used = 0 # If BIP channel 1 is used, set to 1, otherwise 0
125BIP2_used = 0 # If BIP channel 2 is used, set to 1, otherwise 0
126BIP3_used = 0 # If BIP channel 3 is used, set to 1, otherwise 0
127BIP4_used = 0 # If BIP channel 4 is used, set to 1, otherwise 0
128TRIGGER_used = 1 # If TRIGGER channel was enabled, set to 1, otherwise 0
129
130# Fill out the number of EEG channels
131number_eeg = 32
132
133# Channel type for undefined channels
134# Possible channel options include 'eog', 'eeg', 'ecg', 'emg', 'misc'
135BIP1_type = 'eog' # Signal type for BIP channel 1
136BIP2_type = 'eog' # Signal type for BIP channel 2
137BIP3_type = 'ecg' # Signal type for BIP channel 3
138BIP4_type = 'emg' # Signal type for BIP channel 4
139
140
141# Define a variable for MNE that includes all channel types and all channels that are used in the correct order
142if device_type == 'SAGA':
143 channel_types = CREF_used * ['misc'] + number_eeg * ['eeg'] + BIP1_used * [BIP1_type] + BIP2_used * [BIP2_type] + BIP3_used * [BIP3_type] + BIP4_used * [BIP4_type] + TRIGGER_used * ['misc'] + 2 * ['misc']
144elif device_type == 'APEX':
145 channel_types = number_eeg * ['eeg'] + TRIGGER_used * ['misc'] + 4 * ['misc']
146
147# Add montage & electrode location information to the MNE object
148info_mne.set_montage('standard_1020')
149# Create the new object
150mne_object = mne.io.RawArray(mne_object._data, info_mne)
151
152# Create a mask to identify EEG channels
153eeg_mask = [ch_type == 'eeg' for ch_type in channel_types]
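For reference, a list like channel_types can be applied to an MNE object by mapping each channel name to its type, e.g. with MNE's set_channel_types (a minimal sketch, not a verbatim part of the excerpt above):

# Assumes the channel configuration defined above matches the recording,
# i.e. len(channel_types) == len(ch_names)
type_mapping = dict(zip(ch_names, channel_types))
mne_object.set_channel_types(type_mapping)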
Pre-processing#
Pre-processing includes re-referencing the EEG data, selecting a subset of channels, and filtering the EEG data.
Re-referencing: it is common practice to re-reference the EEG data to the average of the signals recorded from both mastoids (channel names M1 and M2). This can be done by calling the set_eeg_reference method of the mne_object (line 156). Similarly, it's possible to select any other (sub)set of channels as reference channels.
Selection of channels: in this example a subset of 9 channels is selected for simplicity (line 159). If the full analysis version is chosen by the user, PSD values for the unfiltered, re-referenced EEG data will be plotted for the selected subset of channels (lines 162 to 168).
Filtering EEG data: a band-pass filter is applied to the EEG data to remove the DC offset, slow drifts, and high-frequency components from the EEG signal. The filter parameters can be specified by the user (lines 170 to 175).
155# Re-referencing to the mastoids
156EEGraw_reref = mne_object.set_eeg_reference(ref_channels=['M1', 'M2'])
157
158#%% Preprocessing
159selected_channels = ['Fz','F3','F4','Pz', 'P3', 'P4','Cz','C3','C4']
160reref_psd = EEGraw_reref.compute_psd(method='welch', fmin=0.1, fmax=50)
161# Show unfiltered data
162if a_flag:
163 EEGraw_reref.plot(scalings=dict(eeg = 100e-6),title="Re-referenced, unfiltered EEG")
164 #PSD plot of re-referenced unfiltered EEG. Only for selected channels.
165 fig_psd2, ax_psd2 = plt.subplots()
166 reref_psd.plot(average=False, picks=selected_channels, exclude="bads", show=False, axes=ax_psd2)
167 ax_psd2.set_title("PSD plot of re-referenced unfiltered EEG for selected channels.")
168 plt.show()
169
170# Filter variables
171f_l = 0.5; # Lower frequency of band-pass (thus, high-pass filter)
172f_h = 35 # Higher frequency of band-pass (thus, low-pass filter)
173
174# Filter the data (high pass + lowpass)
175preprocessed_data= EEGraw_reref.copy().filter(l_freq=f_l, h_freq=f_h,method='iir',phase = 'zero-double')
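As noted above, other reference schemes can be selected instead of the mastoids. The following sketch (not part of the example script, and meant to replace rather than follow line 156) shows a common average reference and a single-channel reference:

EEGraw_avg = mne_object.copy().set_eeg_reference('average')            # common average reference
EEGraw_cz = mne_object.copy().set_eeg_reference(ref_channels=['Cz'])   # single-channel reference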
Applying independent component analysis#
To remove eye movement artifacts from the EEG data, an ICA is applied to the pre-processed data. For more information about ICA, please refer to our blog.
First, a copy of the pre-processed data is created (line 178), so that both the pre-processed data and the ICA-applied data remain available. The number of independent components used during fitting is to be specified by the user (line 179).
Please note that if an integer is given, it must lie within the following range: 1 < number_components < number of channels in the pre-processed data.
Next, ICA fitting is performed to run the ICA decomposition on the pre-processed data (lines 180 and 181).
As visualizing the independent components is essential for selecting the undesired ones, two plots appear on the screen: the time series of all components (line 184) and scalp topography maps of the components (line 185).
When analyzing the sample data provided by TMSi ('sample_data_erp_experiment.poly5'), ICA components 1 and 2 contain eye movements and should be excluded. This can be done by selecting those components for exclusion in an interactive window (please check lines 189 to 202) which appears on the screen.
Finally, ICA is applied (line 205) to remove the eye movement artifacts from the pre-processed EEG data.
177#%% Applying ICA to remove ocular artefacts
178ICAapplied_data = preprocessed_data.copy()
179number_components = 20
180ica = ICA(n_components=number_components, max_iter='auto', random_state=97)
181ica.fit(ICAapplied_data)
182
183# Plot ICA sources and components to analyse them by hand
184ica.plot_sources(ICAapplied_data)
185ica.plot_components()
186
187# Based on visual inspection, manually exclude components to remove EOG artefacts
188# Display a dialog for component exclusion
189msg = "Select ICA components to exclude"
190choices = [f"ICA component {i}" for i in range(number_components)]
191selected_choices = easygui.multchoicebox(msg, "ICA Component Exclusion", choices)
192
193if selected_choices is None:
194 excluded_components = [] # No components selected
195else:
196 excluded_components = [int(choice.split(" ")[-1]) for choice in selected_choices if choice.startswith("ICA component")]
197
198if excluded_components:
199 print(f"Excluded ICA components: {excluded_components}")
200 ica.exclude = excluded_components
201else:
202 print("No ICA components selected for exclusion.")
203
204# Reconstruct original signals with artefacts removed
205ica.apply(ICAapplied_data)
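To verify that the excluded components indeed capture the ocular activity, the cleaned reconstruction can be compared with the pre-processed data using MNE's plot_overlay (a sketch, not part of the example script):

if excluded_components:
    # Compare the pre-processed data with the ICA-cleaned reconstruction
    ica.plot_overlay(preprocessed_data, exclude=excluded_components, picks='eeg')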
Retrieving epochs of data#
In an ERP experiment, it's essential to analyze the EEG data within a certain time window around stimulus presentation. In this example, epochs of -200 ms to 800 ms (stimulus presentation at 0 ms) are retrieved. Depending on the type of TMSi amplifier, stimulus presentation moments are stored in either the 'TRIGGER' or the 'STATUS' channel (given as input by the user in line 211). In lines 210 to 212, a for-loop is used to find the index of this channel.
The 'STATUS' channel of APEX contains a baseline value of 32 (please check chapter 5.5 of the user manual for more information), which needs to be removed (line 215) so that the target and non-target stimuli match the way the ERP data was collected. To define epochs of EEG data, it's crucial to identify the moments at which stimuli were presented to the participant (in other words, the events). This is done using the find_events function from the mne library (line 220). Please note that the name of the channel containing the events and the start/end of events are set by the user (via the stim_channel and output parameters in line 220). Possible values for the output parameter are 'onset', 'offset', or 'step'.
Once the events are identified, epochs are defined for both target and non-target stimuli using the Epochs class from the mne library (line 228). Please note that the time window before and after stimulus presentation is determined by the user via the tmin and tmax parameters. For more information about all other parameters, please visit the MNE documentation. Finally, if the full analysis version is selected, a plot of all epochs found in the EEG data appears on the screen. The scaling of this plot can be set by the user (line 232).
207#%% Retrieving epoched data
208
209# Find the index of the TRIGGER channel (for APEX, change to STATUS to find stored triggers)
210for i in range(len(mne_object.ch_names)):
211 if mne_object.ch_names[i] == 'STATUS':
212 trigger_chan_num = i
213
214# Find the samples of the TRIGGER channel
215trigger_chan_data = samples_mne[trigger_chan_num] -32 # removing baseline
216trigger_chan_data = [0 if value == 30 else value for value in trigger_chan_data] # remove synchronisation issue
217
218# Find events in the data
219ICAapplied_data._data[trigger_chan_num] = trigger_chan_data
220events = mne.find_events(ICAapplied_data, stim_channel = 'STATUS', output = 'onset')
221
222# Assign target and non-target value
223event_dict = {'target stimulus': target_value, 'non-target stimulus': nontarget_value}
224
225# Epoch data based on events found (-200 ms to +800 ms)
226epochs_nobaseline_noICA = mne.Epochs(preprocessed_data, events, event_id=event_dict, tmin=-0.2, tmax=0.8,baseline=None,
227 preload=True) # only needed for counting ocular artefacts per epoch
228epochs = mne.Epochs(ICAapplied_data, events, event_id=event_dict, tmin=-0.2, tmax=0.8,baseline=(-0.2, 0),
229 preload=True)
230
231if a_flag:
232 mne.viz.plot_epochs(epochs, title='Individual Epochs', scalings=dict(eeg = 30e-6))
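As a quick sanity check (not part of the example script), the number of detected target and non-target events can be counted directly from the events array:

n_target = np.sum(events[:, 2] == target_value)
n_nontarget = np.sum(events[:, 2] == nontarget_value)
print(f"Found {n_target} target and {n_nontarget} non-target events")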
Plotting ERPs#
In this step, ERPs are plotted for both target and non-target stimuli. For each channel, the average of the EEG data over all the specified epochs is plotted (lines 237 and 238). A topographic map of the ERPs for all EEG channels will appear on the screen (line 241). Please note that the topographic map is interactive: clicking on a channel's plot shows it in more detail.
234#%% Plot ERPs
235
236# Average epochs per channel
237erp_target = epochs['target stimulus'].average()
238erp_nontarget = epochs['non-target stimulus'].average()
239
240# Plot the results for both events per location of electrode
241mne.viz.plot_evoked_topo([erp_target, erp_nontarget],scalings = None, title = "Topological overview target and nontarget")
Plotting the average ERPs over trials for selected channels#
To compare ERPs recorded after presentation of target versus non-target stimuli, two plots are generated for the selected subset of channels. The code below (lines 247 to 263) creates a figure containing two subplots, one per stimulus type. Please note that the y-axis limits are the same for both subplots.
244# Get the channel indices corresponding to the selected channel names
245channel_indices = [epochs.info['ch_names'].index(ch) for ch in selected_channels]
246
247# Plot the average ERPS for the selected channels of interest
248fig, axs = plt.subplots(2, 1) # figure with two subplots
249fig.suptitle('Average over trials for selected channels', fontsize=16)
250
251# Target stimulus in the first subplot average of trials
252axs[0].grid(True)
253erp_target.plot(spatial_colors=True,scalings = None, picks=channel_indices, axes=axs[0],show=False)
254axs[0].set_title('Average ERP - Target Stimulus')
255axs[0].set_ylabel('Amplitude (μV)')
256axs[0].set_ylim(-15,30)
257# Non-target stimulus in the second subplot average of trials
258axs[1].grid(True)
259erp_nontarget.plot(spatial_colors=True,scalings = None, picks=channel_indices, axes=axs[1],show=False)
260axs[1].set_title('Average ERP - Non-Target Stimulus')
261axs[1].set_ylabel('Amplitude (μV)')
262axs[1].set_ylim(-15,30)
263fig.set_tight_layout(True)
Plotting the differences between target and non-target responses#
If the full analysis version is selected by the user, the difference between the target and non-target ERPs is plotted (lines 272 to 278).
272# #%% Subtracting nontarget ERP from target ERP
273erp_diff = mne.combine_evoked([erp_target, erp_nontarget], weights=[1,-1])
274if a_flag: # plots the difference between target and non-target values
275 fig_diff, ax_diff = plt.subplots()
276 erp_diff.plot(spatial_colors=True,scalings = None, picks=channel_indices,show=False, axes=ax_diff)
277 ax_diff.set_title("Non-target values are subtracted from target values.")
278 ax_diff.grid()
Finding peak values#
The code in lines 281 to 345 detects the peak values of the ERPs for the selected subset of channels, which is needed to calculate the P300 amplitude. The peaks are detected within a specific time range after stimulus presentation (lines 283 to 286).
281#%% Finding P300 peak
282
283# Define the time range of interest in seconds
284start_time = 0.25
285end_time = 0.5
286start_baseline = -0.2
287stop_baseline = 0
288
289# Initialize lists to store peak information for each channel and epoch
290P300_indx = [[] for _ in selected_channels] # P300 index
291P300_mags = [[] for _ in selected_channels] # P300 magnitude
292P300_latency = [[] for _ in selected_channels] # P300 latency
293P300_amplitude = [[] for _ in selected_channels] # P300 amplitude
294P300_meanvoltage = [[] for _ in selected_channels] # P300 mean voltage
295
296# Loop over all channels
297for channel_idx,channel_name in enumerate(selected_channels):
298 # Loop over all epochs
299 for epoch_idx in range(len(erp_diff.data)):
300 epoch_peak_amplitudes = [] # Initialize a list for this epoch's peak amplitudes
301
302 # Select epoch data for the channel
303 data = epochs[epoch_idx].pick([channel_name])
304 data = data.get_data(units = 'uV')[0,0]
305
306 # Select the time range of interest
307 time_indices = np.where((epochs.times >= start_time) & (epochs.times <= end_time)) # selecting time indices of time range
308 channel_data = data[time_indices] # selecting data points in time range
309 time_data = epochs.times[time_indices] # selecting time data points in time range
310
311 # Select time range of pre-stimulus baseline
312 time_idx_base = np.where((epochs.times >= start_baseline) & (epochs.times <= stop_baseline))
313 baseline_data = data[time_idx_base]
314
315 # Compute the mean voltage
316 mean_voltage_baseline = np.mean(baseline_data) # mean voltage pre-stimulus baseline
317 mean_voltage_timerange = np.mean(channel_data) # mean voltage P300 time range
318
319 # Find peaks
320 locs, mags = peak_finder(channel_data, extrema=1)
321
322 # Determine the highest peak
323 highest_peak_mags = max(mags) # magnitude
324 highest_peak_loc = locs[np.argmax(mags)] # index
325 highest_peak_lat = time_data[highest_peak_loc]*1000 # latency in ms
326 highest_peak_amp = highest_peak_mags - mean_voltage_baseline
327
328 # Removing end-point detected "peaks"
329 if highest_peak_lat == 500:
330 highest_peak_lat = 0
331 highest_peak_amp = 0
332 if highest_peak_lat == 250:
333 highest_peak_lat = 0
334 highest_peak_amp = 0
335
336 # Append the locations and magnitudes for this epoch to the respective lists
337 P300_indx[channel_idx].append(highest_peak_loc)
338 P300_mags[channel_idx].append(highest_peak_mags)
339
340 P300_latency[channel_idx].append(highest_peak_lat)
341 P300_amplitude[channel_idx].append(highest_peak_amp)
342 P300_meanvoltage[channel_idx].append(mean_voltage_timerange)
343
344 # Save all peak data
345 epoch_peak_amplitudes.append(locs) # Store the peak amplitude and time for this channel
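As a cross-check of the manual peak detection above (a sketch, not part of the example script), MNE's Evoked.get_peak can report the strongest positive peak of the averaged target response within the same time window:

# Channel, latency (s) and amplitude (V) of the largest positive peak in the target ERP
ch_peak, lat_peak, amp_peak = erp_target.get_peak(
    ch_type='eeg', tmin=start_time, tmax=end_time,
    mode='pos', return_amplitude=True)
print(f"Target ERP peak at {ch_peak}: {lat_peak * 1000:.0f} ms, {amp_peak * 1e6:.1f} uV")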
Latency, amplitude and mean voltage calculations#
Lines 347 to 420 produce a plot and a table with the mean and standard deviation of the P300 latency, amplitude, and mean voltage, which are characteristics of the P300 response. To get more information about these quality measures, please refer to our blog.
347#%% Quality measures - mean + std P300 latency, amplitude, mean voltage
348# Initialize lists for mean and standard deviation
349mean_P300_latency = []
350std_P300_latency = []
351mean_P300_amplitude = []
352std_P300_amplitude = []
353mean_P300_meanvoltage = []
354std_P300_meanvoltage = []
355
356for latency_list, amplitude_list, meanvoltage_list in zip(P300_latency, P300_amplitude, P300_meanvoltage):
357 # Convert the lists to NumPy arrays for calculations
358 latency_array = np.array(latency_list)
359 amplitude_array = np.array(amplitude_list)
360 meanvoltage_array = np.array(meanvoltage_list)
361
362 # Calculate mean and standard deviation for latency
363 mean_latency = np.mean(latency_array)
364 std_latency = np.std(latency_array)
365
366 # Calculate mean and standard deviation for amplitude
367 mean_amplitude = np.mean(amplitude_array)
368 std_amplitude = np.std(amplitude_array)
369
370 # Calculate mean and standard deviation for mean voltage
371 mean_meanvoltage = np.mean(meanvoltage_array)
372 std_meanvoltage = np.std(meanvoltage_array)
373
374 # Append the results to the respective lists
375 mean_P300_latency.append(mean_latency)
376 std_P300_latency.append(std_latency)
377 mean_P300_amplitude.append(mean_amplitude)
378 std_P300_amplitude.append(std_amplitude)
379 mean_P300_meanvoltage.append(mean_meanvoltage)
380 std_P300_meanvoltage.append(std_meanvoltage)
381
382
383# Plotting mean, std for P300 latency, amplitude, and mean voltage
384num_channels = len(selected_channels)
385categories = ['Latency', 'Amplitude', 'Mean Voltage']
386channel_labels = selected_channels
387fig2, axs2 = plt.subplots(nrows=len(categories), ncols=1, figsize=(10, 6))
388
389for i, category in enumerate(categories):
390 ax = axs2[i]
391 mean_values = [mean_P300_latency, mean_P300_amplitude, mean_P300_meanvoltage][i]
392 std_values = [std_P300_latency, std_P300_amplitude, std_P300_meanvoltage][i]
393 x = np.arange(num_channels)
394
395 ax.errorbar(x, mean_values, yerr=std_values, fmt='o', markersize=6, alpha=0.7)
396 ax.set_title(f'{category} by Channel')
397 ax.set_xticks(x)
398 ax.set_xticklabels(channel_labels)
399 ax.grid(axis='y', linestyle='--', alpha=0.6)
400
401axs2[0].set_ylabel('Latency (ms)')
402axs2[1].set_ylabel('Amplitude (μV)')
403axs2[2].set_ylabel('Mean Voltage (μV)')
404
405fig2.set_tight_layout(True)
406plt.show()
407
408# Create empty DataFrame
409data = {'Category': [], 'Channel': [], 'Mean': [], 'Std': []}
410df = pd.DataFrame(data)
411
412# Populate the DataFrame with mean and std values
413for i, category in enumerate(categories):
414 for j, channel_name in enumerate(channel_labels):
415 mean = [mean_P300_latency, mean_P300_amplitude, mean_P300_meanvoltage][i][j]
416 std = [std_P300_latency, std_P300_amplitude, std_P300_meanvoltage][i][j]
417 df = df.append({'Category': category, 'Channel': channel_name, 'Mean': mean, 'Std': std}, ignore_index=True)
418
419# Display the DataFrame
420print(df)
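Note that DataFrame.append (used on line 417) was removed in pandas 2.0. If you run the script with a recent pandas version, the same table can be built by collecting the rows first, for example:

rows = []
for i, category in enumerate(categories):
    for j, channel_name in enumerate(channel_labels):
        rows.append({'Category': category, 'Channel': channel_name,
                     'Mean': [mean_P300_latency, mean_P300_amplitude, mean_P300_meanvoltage][i][j],
                     'Std': [std_P300_latency, std_P300_amplitude, std_P300_meanvoltage][i][j]})
df = pd.DataFrame(rows)
print(df)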