On-chip dielectrophoretic device for cancer cell manipulation: A numerical and artificial neural network study

In this section, the computational domain and the basic assumptions are first described, and then the method for solving the governing equations is presented.

A. Model configuration

Various model geometries have been developed over the years in the literature for cell manipulation purposes.35,36,49 Figure 2(b) shows the 3D schematics of the designed chip and the detailed measures of the channel. This geometry is inspired by the simpler microfluidic chip designed by Piacentini et al.35 The effective length of the channel is equipped with seven pairs of electrodes facing each other. The electrodes are arranged so that the applied ±V AC voltage produces a spatially alternating electric field. The device is designed such that all particles enter the chip from the side channels, and the sheath flow, which streams through the central inlet, pushes the nDEP particles toward the electrodes. After separation is achieved, the target cells leave the channel through the central outlet, and the others exit through the side ones. By solving the heat transfer equation, the temperature distribution in the entire domain is obtained. A threshold value of maximum temperature is specified for discerning the conditions that may harm the healthy cells.

As mentioned before, the particle diameter is the most significant parameter in DEP force estimation. Furthermore, other factors, such as the particle's dielectric properties, are included based on the single-shell model. Table II presents the input values of the cells for the numerical simulations.

TABLE II. Cell properties for numerical simulations.

Cell type   dp (μm)   ts (nm)   σc (S/m)   σs (S/m)   εc    εs     ρc (kg/m³)   Reference
PLT         1.8       8         0.25       10⁻⁶       50    6      1050         50
RBC         5         9         0.31       10⁻⁶       59    4.44   1050         51
WBC         10        4         0.65       10⁻⁶       65    6      1050         52, 53
MCF-7       20        7         1          10⁻⁶       50    7      1050         54–56

According to Eq. (3), as the conductivity of the medium rises, the amount of heat dissipated from the electrodes increases, leading to an increase in the medium temperature. Hence, DEP separation of living cells requires buffer solutions with low conductivities for cell viability concerns.34,57 A phosphate-buffered saline (PBS) buffer solution is selected. PBS is a non-toxic solution commonly used in many laboratories that prevents cell membrane rupture owing to its isotonic behavior.58 The properties of the PBS solution are summarized in Table III.

TABLE III. Fluid properties of the buffer medium.

Parameter         σm (mS/m)   εm   ρm (kg/m³)   μm (mPa·s)   km (W/m·K)   cp,m (kJ/kg·K)   Reference
Buffer solution   55          80   1000         1            0.75         4                35, 59, 60

Considering all the input parameters, the Clausius–Mossotti factor should be adjusted so that the cells become subjected to nDEP. The diagram of K_CM of various cells over a wide range of frequencies is shown in Fig. 2(c). According to this figure, the electric field frequency should be selected in a region where K_CM is negative. Therefore, a frequency of 100 kHz is chosen for the entire simulation.
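As an illustration of how a diagram like Fig. 2(c) can be produced, the sketch below evaluates Re(K_CM) with the standard single-shell Clausius–Mossotti expression (a textbook form, not code from this work), using the RBC values of Table II and the PBS medium of Table III:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def complex_perm(eps_r, sigma, omega):
    """Complex permittivity eps* = eps_r*eps0 - j*sigma/omega."""
    return eps_r * EPS0 - 1j * sigma / omega

def k_cm_single_shell(f, r_p, t_s, eps_c, sig_c, eps_s, sig_s, eps_m, sig_m):
    """Re(K_CM) of a single-shell particle (core + membrane) in a medium at frequency f."""
    omega = 2 * np.pi * f
    ec = complex_perm(eps_c, sig_c, omega)   # cytoplasm (core)
    es = complex_perm(eps_s, sig_s, omega)   # membrane (shell)
    em = complex_perm(eps_m, sig_m, omega)   # suspending medium
    gamma = ((r_p + t_s) / r_p) ** 3         # (outer radius / inner radius)^3
    k_core = (ec - es) / (ec + 2 * es)
    e_eq = es * (gamma + 2 * k_core) / (gamma - k_core)  # equivalent homogeneous particle
    return ((e_eq - em) / (e_eq + 2 * em)).real

# RBC (Table II) in the PBS buffer (Table III) at the chosen 100 kHz
k = k_cm_single_shell(100e3, r_p=2.5e-6, t_s=9e-9,
                      eps_c=59, sig_c=0.31, eps_s=4.44, sig_s=1e-6,
                      eps_m=80, sig_m=55e-3)
print(k)  # negative at 100 kHz, i.e., the nDEP regime
```

With the low-conductivity membrane and the comparatively conductive buffer, the factor comes out negative at 100 kHz, consistent with operating the device in nDEP.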

B. Numerical procedure

Time-consuming experimental trial and error can often be replaced by inexpensive numerical simulations. This paper investigates the DEP cell separation of four different cell types using the commercial finite element software COMSOL Multiphysics 5.5. The governing equations are solved numerically in two steps:

Step 1: The Creeping Flow, Heat Transfer in Fluids, and Electric Currents modules in COMSOL are utilized to solve the steady-state form of Eqs. (1)–(4) through the Nonisothermal Flow and Electromagnetic Heating couplings. The initial voltage, velocity, and temperature values of the solution are set to 0 V, 0 m/s, and 25 °C, respectively.

Step 2: Using the results from the previous step, the Particle Tracing for Fluid Flow module is utilized to calculate the imposed forces and the dynamic locations of the cells. The dielectrophoretic and drag forces and the particle properties are added to the module in order to solve Eqs. (5) and (10).

The boundary conditions of both steps are shown in Fig. 2(d).
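As a rough, non-COMSOL illustration of what Step 2 integrates, the sketch below advances a single particle under Stokes drag and a point-dipole DEP force; the 1D setup, time step, field gradient, and default particle values (a WBC-sized cell) are hypothetical:

```python
import numpy as np

# Illustrative 1D particle update: m dv/dt = F_drag + F_DEP,
# with Stokes drag and the point-dipole DEP force (explicit Euler).
def trace_particle(x0, v0, u_fluid, grad_E2, dt, steps,
                   r_p=5e-6, rho_p=1050.0, mu=1e-3,
                   eps_m=80 * 8.854e-12, k_cm=-0.45):
    m = rho_p * (4 / 3) * np.pi * r_p ** 3          # particle mass
    x, v = x0, v0
    for _ in range(steps):
        f_drag = 6 * np.pi * mu * r_p * (u_fluid(x) - v)          # Stokes drag
        f_dep = 2 * np.pi * eps_m * r_p ** 3 * k_cm * grad_E2(x)  # DEP force
        v += dt * (f_drag + f_dep) / m
        x += dt * v
    return x, v

# With no field, the particle relaxes to the local fluid velocity (1 mm/s here).
x, v = trace_particle(0.0, 0.0, u_fluid=lambda x: 1e-3,
                      grad_E2=lambda x: 0.0, dt=1e-7, steps=2000)
```

The time step must resolve the particle's momentum relaxation time m/(6πμr_p), a few microseconds here, which is why dt is taken well below it.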

C. Artificial neural network model

The study of the entire set of working points for finding the cell separation characteristics is costly and undesirable. Instead, the output data of a limited number of simulations can be used to establish a prediction model with an artificial neural network (ANN). Herein, MATLAB® R2018b is used to develop an ANN model for value prediction. In this study, the input parameters of the numerical simulations are the voltage and the sheath flow rate, and the most important output parameters of the parametric study are the maximum temperature of the field, the separation time, and the focusing efficiency. The input parameters, along with the maximum temperature of the field, are used to train the ANN model. Figure 3 illustrates the configuration of the neurons of the feed-forward ANN model in different layers. Every connection between two neurons is associated with a weight. In this model, the mean squared error (MSE) loss function is used to evaluate the performance of the model,

MSE = (1/N) ∑_{j=1}^{N} (Y_j − r_j)²,  (14)

where N is the database size and Y_j and r_j are, respectively, the network output and target values. Moreover, the activation functions of the neurons in the hidden layer and the last layer are, respectively, set to the rectified linear unit (ReLU) and linear, as such regression problems demand. The ReLU function transfers positive values linearly while mapping negative values to zero.

The input and output data of the simulations are randomly divided into training, validation, and testing portions with the ratio 8:1:1. These data sets consist of the critical input and output parameters of the simulation, which are determined during the parametric study using numerical simulations. The first step in training a supervised regression model is to normalize the input features of the model.
When the features are not on the same scale, it is recommended to normalize the input features so as to increase the speed and stability of the training process. Here, we use the Min–Max method for normalizing the input features through Eq. (15),

X = (x − x_min) / (x_max − x_min),  (15)

where X is the normalized array of input features of the neural network, x is the array of input features, and x_min and x_max are the minimum and maximum of the feature before normalization, respectively.

A detailed graphical description of how the numerical simulations are performed and the artificial neural network model is formed is illustrated in Fig. 4. To find the best neural network architecture while minimizing biases in training, K-fold cross-validation is implemented (K = 10). K-fold cross-validation randomly splits the data into training, validation, and testing categories K times and evaluates the performance of the model on unseen data. The best structure is selected based on the average performance of the structure across the different folds. Four hyperparameters are investigated, namely the number of hidden-layer neurons, the combination coefficient, the regularization, and the early stopping criterion, leading to trial and error over 480 different permutations of network architectures. As shown in Fig. 3, the selected model comprises an input layer with two neurons, the electric field vector norm |E| and the aggregate flow rate V̇; a hidden layer with ten neurons; and an output layer with one neuron, the maximum temperature. The optimum number of hidden neurons is selected from arrangements with 2, 5, 7, and 10 neurons. Given the dataset size, more than 10 neurons are not considered, since the model began to overfit beyond that point. Likewise, increasing the number of hidden layers resulted in overfitting. Among the values of 10⁻³, 1, and 100, a combination coefficient of 100 was the most helpful.
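A minimal sketch of the Min–Max normalization of Eq. (15), applied column-wise to a hypothetical feature matrix whose two columns stand in for the voltage and flow-rate inputs:

```python
import numpy as np

def minmax_normalize(x):
    """Eq. (15): scale each feature column to [0, 1]; also return the
    per-column minima and maxima so new inputs can be scaled identically."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min), x_min, x_max

# Hypothetical feature matrix: columns are a voltage and a sheath flow rate.
features = np.array([[10.0, 1.0],
                     [20.0, 3.0],
                     [30.0, 5.0]])
X, x_min, x_max = minmax_normalize(features)
```

Keeping x_min and x_max is important in practice: any input fed to the trained network later must be scaled with the training-set extrema, not its own.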
In order to improve the generalization of the model, it is recommended that the loss function be modified by adding a term containing the average of the sum of squares of the network weights and biases using a performance ratio (γ), Eq. (16), in which n indicates the number of network weights and biases,

MSE_reg = (1 − γ) MSE + γ (1/n) ∑_{j=1}^{n} (w_j)².  (16)

Among performance ratios between 0 and 0.3, the best performance is achieved with no regularization. In addition, two early stopping criteria of 5 and 10 epochs are examined, and the criterion of 10 performs better. To sum up, the configuration with one hidden layer of 10 neurons, a combination coefficient of 100, a regularization performance ratio of 0, and a stopping criterion of 10 results in the best performance, with an average validation MSE of 0.0360 across all folds.
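The loss functions of Eqs. (14) and (16) can be sketched directly; with γ = 0 the regularized loss reduces to the plain MSE, matching the best-performing setting reported above (the sample values are illustrative only):

```python
import numpy as np

def mse(y, r):
    """Eq. (14): mean squared error between network outputs y and targets r."""
    return np.mean((np.asarray(y) - np.asarray(r)) ** 2)

def mse_reg(y, r, weights, gamma):
    """Eq. (16): MSE blended with the mean squared weight/bias magnitude
    via the performance ratio gamma."""
    w = np.asarray(weights)
    return (1 - gamma) * mse(y, r) + gamma * np.mean(w ** 2)

# gamma = 0 recovers the plain MSE of Eq. (14).
loss = mse_reg([1.0, 2.0], [1.5, 1.5], weights=[0.1, -0.2, 0.3], gamma=0.0)
```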

After defining the network architecture and normalizing the data, the network can be trained. First, the weights are randomly initialized. Then, the training algorithm updates the weights to minimize the network error function, performing the following steps for each training sample: calculating the output of the network, calculating the error of the output neuron, calculating the error message of the output neuron, modifying the weights of the connections leading to the output layer, calculating the error message of the neurons of the previous layer, and modifying the weights of the connections leading to the previous layer through backpropagation. Finally, the stopping criterion is checked, and the next epoch begins if it is not satisfied. Meanwhile, at each epoch, the performance of the fitted model is checked against the unseen validation dataset, and the testing dataset is used to evaluate the final fitted model. When the training error reaches the desired criterion, the training process ends, and the network is ready for use. Generally, there are several criteria for stopping the training process. One is that the MSE of all data becomes less than a specific value, i.e., the termination criterion. Another is that the MSE of the validation dataset does not decrease for a certain number of consecutive epochs, known as the early stopping criterion, which avoids over-fitting.
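The epoch loop described above, with both the termination criterion and the early stopping criterion, might be skeletonized as follows; `net_step` and `val_mse` are hypothetical callables standing in for one epoch of weight updates and a validation pass:

```python
def train(net_step, val_mse, max_epochs=1000, patience=10, goal=1e-4):
    """Skeleton of the training loop: net_step() runs one epoch of weight
    updates and returns the training MSE; val_mse() evaluates the current
    model on the held-out validation set. Returns the best validation MSE."""
    best_val, stall = float("inf"), 0
    for epoch in range(max_epochs):
        train_err = net_step()
        if train_err < goal:            # termination criterion on training MSE
            break
        v = val_mse()
        if v < best_val:
            best_val, stall = v, 0      # validation still improving
        else:
            stall += 1
            if stall >= patience:       # early stopping (here, 10 stalled epochs)
                break
    return best_val
```

A real implementation would also snapshot the weights at the best validation epoch, so that early stopping restores the least over-fitted model rather than the last one.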

The training algorithm is chosen to be Levenberg–Marquardt, which combines the gradient descent method and the Gauss–Newton algorithm; it occupies rather more memory but requires less running time. This algorithm showed the best performance for the current problem among the various training algorithms tried. As stated before, the weights should be updated during training in order to minimize the error function. The error function is defined as the sum of squared errors over all training data and is calculated by the following equations:

E(w) = (1/2) ∑_{q=1}^{Q} ∑_{m=1}^{M} (e^{(q,m)})²,  (17)

e^{(q,m)} = r_m^{(q)} − Y_m^{(q)},  (18)

in which w is the vector of weights and Q and M are the numbers of training data and outputs, respectively. In general, the weights are updated through

w_{k+1} = w_k + Δw_k.  (19)

The term Δw_k is computed differently by different algorithms. The Levenberg–Marquardt algorithm calculates Δw_k using the Jacobian matrix of the errors (J), a positive constant named the combination coefficient (μ), and the error vector (e),61

Δw_k = −(J_kᵀ J_k + μI)⁻¹ (J_kᵀ e_k).  (20)

In this equation, I is the identity matrix. When μ is very small, the algorithm behaves like the Gauss–Newton algorithm, and when μ is large, it approaches the gradient descent method.
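A minimal sketch of one such update, using the standard Levenberg–Marquardt step Δw = −(JᵀJ + μI)⁻¹Jᵀe; the toy least-squares problem and the small μ are illustrative, and because the residuals are linear in w, a single step lands essentially on the solution:

```python
import numpy as np

def lm_step(w, jacobian, residuals, mu):
    """One Levenberg-Marquardt update: dw = -(J^T J + mu*I)^(-1) J^T e."""
    J = jacobian(w)                    # Jacobian of the errors w.r.t. the weights
    e = residuals(w)                   # current error vector
    n = J.shape[1]
    dw = -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
    return w + dw

# Toy problem: fit w to minimize |A w - b|^2 (residuals linear in w).
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
w = lm_step(np.zeros(2), lambda w: A, lambda w: A @ w - b, mu=1e-9)
```

In an actual training run, μ is adapted per iteration: decreased after a successful step (more Gauss–Newton-like) and increased after a failed one (more gradient-descent-like), which is the role of the combination coefficient tuned above.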
