GENERALIZED ARTIFICIAL NEURAL NETWORK MODELLING AND ITS APPLICATION IN PERFORMANCE PREDICTION OF SUSTAINED RELEASE MONOLITHIC TABLETS
Tulsi H. Vyas * and Girish N. Patel
Department of Pharmaceutics, Shree S. K. Patel College of Pharmaceutical Education & Research, Ganpat University, Ganpat Vidyanagar, Mehsana-Gozaria Highway, Mehsana, Gujarat, India.
ABSTRACT: Artificial Intelligence (AI) is the simulation of human intelligence. From delivering groceries to the doorstep to tackling the toughest problems in research laboratories, it now touches nearly every part of human life, and the pharmaceutical industry is no exception. An Artificial Neural Network (ANN) is a form of AI used to solve non-linear problems and predict outputs for given input parameters from training data. In this research work, such a generalized ANN was developed to predict drug release from sustained-release monolithic tablets. It was trained by the backpropagation method under supervised learning. The developed model was evaluated on the basis of RMSE and the similarity and dissimilarity factors, and it predicts the output with a best achieved average error of ~0.0095 and R2 of 0.9953. Such ANNs combine accumulated formulation experience with machine intelligence and can eliminate tedious laboratory work, saving both cost and time.
Keywords: Artificial Neural Network (ANN), backpropagation method, Supervised Learning, Input Feature Selection, Monolithic Tablet, RMSE
INTRODUCTION: Artificial Intelligence (AI) is an area of computer science dedicated to simplifying both routine and difficult tasks and is bringing revolutionary changes to many fields. AI has entered almost every field, and the pharmaceutical field is no exception 1. Continuous development of new pharmaceutical formulations, alongside regular troubleshooting of existing ones, is a crucial task for pharmaceutical industries. The performance of a pharmaceutical product depends on multiple factors, and it is difficult to predict product performance during complex formulation development.
Formulators must rely on empirical outcomes, together with decades of experience, to select appropriate ingredients and processing conditions and even to find the right starting point for a successful formulation. Traditionally, formulators use empirical or statistical methods. However, such statistical methods help mainly with screening and can mislead in complex formulation development; for example, when more than five factors affect the formulation, a very large number of experiments must be performed 2, 3, 4. It therefore becomes important to work smarter by combining long-standing experience with today's smart technology. ANNs have also proved very handy in the current pandemic, from recognizing patterns of virus spread and predicting COVID-19 cases to estimating the pandemic's effect on the world economy 5, 6, 7.
In such cases, AI can lend a hand. Using AI, models can be developed that mimic the biological brain; such models are called Artificial Neural Networks (ANNs). They simulate the brain: they learn, solve problems, and draw conclusions. According to Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers, an ANN is "a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs" 8. In the pharmaceutical field, ANN models can be used at various stages of the formulation and development of controlled-release matrix tablets, such as the optimization of formulations and manufacturing processes 9, 10, 11.
What is an ANN? Some categories of problems cannot be formulated as an algorithm: problems that depend on many subtle factors, for example the purchase price of real estate, which our brain can (approximately) estimate. Without an algorithm, a computer cannot do the same. The question, then, is how such problems are explored at all: we learn, a capability computers do not have. Humans have a brain that can learn; computers have processing units and memory that allow them to perform the most complex numerical calculations in a very short time, but they are not always adaptive 12.
Artificial neural network (ANN) technology is a group of computational methods for modelling and pattern recognition that function similarly to the neurons of the brain. An ANN is a computational system inspired by the structure, processing method, and learning ability of a biological brain. In the brain, biological neurons receive inputs from external sources, combine them (performing a non-linear operation), and then make a decision based on the final result. Many types of neural networks exist, but all share the same basic principle: receive inputs, process them, and produce an output 13. ANNs are a type of mathematical model that simulates the biological nervous system and draws on analogues of adaptive biological neurons. Table 1 compares the terminologies of the Biological Neural Network (BNN) and the Artificial Neural Network (ANN). A major advantage of ANNs over statistical modelling is that they do not require rigidly structured experimental designs and can map functions using historical or incomplete data.
TABLE 1: COMPARISON OF TERMINOLOGIES BETWEEN BNN AND ANN
Biological Terminology | Artificial Neural Network Terminology |
Neuron | Node/ Unit/ Neuron |
Synapse | Connection/ Edge/ Link |
Synaptic Efficiency | Connection Strength/ Weight |
Firing Frequency | Node Output |
ANNs are known to be a powerful tool for simulating various non-linear systems and have been applied to numerous problems of considerable complexity in many fields, including engineering, psychology, medicinal chemistry, and pharmaceutical research. They are good pattern recognizers and robust classifiers, with the ability to generalize when making decisions based on imprecise input data 17, 14.
General Applications of ANN: 15, 16, 18, 19
- Pattern Classification Applications
- Speech Recognition and Speech Synthesis
- Classification of radar/sonar signals
- Remote Sensing and image classification
- Handwritten character/digits Recognition
- ECG/EEG/EMG Filtering/Classification
- Credit card application screening
- Data mining, Information retrieval
- Control, Time series, Estimation
- Machine Control/Robot manipulation
- Financial/ Scientific/ Engineering Time series forecasting.
- Inverse modelling of vocal tract
- Optimization
- Travelling salesperson problem
- Multiprocessor scheduling and task assignment
- Real World Application Examples
- Real Estate appraisal
- Credit scoring
- Geochemical modelling
- Hospital patient stay length prediction
- Breast cancer cell image classification
- Jury summoning prediction
- Precision direct mailing
- Natural gas price prediction
- In drug discovery: Quantitative Structure-Activity Relationship (QSAR), Quantitative Structure Toxicity Relationship (QSTR), Virtual Screening (VS)
Applications of ANN in Pharmaceutical Product and Process Development: 20, 21, 22
- In the modeling and optimization of pharmaceutical formulations
- In minimizing the capping tendency during tableting process optimization.
- In the prediction of the in-vitro permeability of drugs
- Optimizing emulsion formulation
- Determination of factors controlling the particle size of nanoparticle.
- ANN in tablet manufacturing.
- Investigation of the effects of process variables on derived properties of spray-dried solid dispersion.
- Quantitative structure Property relationship and Molecular Modeling.
- Molecular de novo design and combinatorial libraries.
- Validation of pharmaceutical processes.
- Modeling the response surface in HPLC
- Structure Retention Relationships in Chromatography.
Artificial Neural Network Structure: As a biologically inspired computational model, an ANN is capable of simulating the neurological processing ability of the human brain. An average human brain contains about 100 billion neurons, each connected to 1,000-10,000 others 23.
A single neuron consists of three major parts (Fig. 1):
FIG. 1: A BIOLOGICAL AND AN ARTIFICIAL NEURON. (Via https://www.quora.com/What-is-the-differences-between-artificial-neural-network-computer-science-and-biological-neural-network)
- Dendrites (fine, branched-out threads) - carry signals into the cell
- The cell body - receives and processes the information
- The axon (a single longer extension) - carries the signal away and relays it to the dendrites of the next neuron or to the receptor of a target cell. Signals are conducted through the cells in an all-or-none fashion.
The arrangement of neurons into layers and the connection pattern within and between layers is called the network architecture.
Simulating the BNN, an ANN consists of 3 layers, as follows 24:
- Input Layer: It contains the units (artificial neurons) that receive input from the outside world, from which the network learns, recognizes, or otherwise processes information.
- Hidden layer: These units sit between the input and output layers. The job of the hidden layer is to transform the input into something the output units can use; hidden layers may differ between network types.
- Output layer: It contains the units that produce the network's response; their number depends on the problem. The output layer receives connections from the hidden layer and returns output values that correspond to the prediction of the response variable; its active nodes combine and transform the data to produce the output values. Fig. 2 shows the basic architecture of an ANN.
FIG. 2: ARCHITECTURE OF ANN
Weight and Activation Function: A weight is a network parameter that transforms the input data within the hidden layer. Training begins with arbitrary weight values - they may be random numbers - and proceeds iteratively. Each pass through the complete training set is called an epoch. In each epoch the network adjusts the weights in the direction that reduces the error, and as this incremental adjustment continues, the weights gradually converge to a locally optimal set of values. Many epochs are usually required before training is complete. In short, weights are parameters the network itself adjusts to reduce the error while learning. Activation functions are mathematical equations that determine the output of a neural network. A function is attached to each neuron and determines whether it should "fire", depending on whether the neuron's input is relevant to the model's prediction. Common activation functions include the sigmoid, hyperbolic tangent, softmax, softsign, rectified linear unit, and exponential linear unit (a minimal numerical sketch of a single neuron and these activations is given after Table 2). ANN models can be classified into various categories based on different parameters 25, 26, 27, as shown in Table 2:
TABLE 2: TYPES OF ANN MODEL BASED ON VARIOUS PARAMETERS
Parameter | Types |
Based on their function | Prediction neural network (non-adaptive network), Clustering neural network (feature-extracting network), Association neural network |
Based on nature of weights | Fixed, Adaptive |
Based on learning | Feed-forward, Recurrent |
Based on memory unit | Static, Dynamic |
Based on development of networks | Single-layer, Multi-layer |
Miscellaneous | Hopfield network, Stochastic neural network, Modular neural network, Radial basis function neural network, Kohonen self-organizing neural network, Convolutional neural network, Boltzmann machine network, Long Short-Term Memory units (LSTMs) |
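As a minimal numerical illustration of the ideas above, the sketch below (in Python, which is not the software used in this study) implements a single artificial neuron and a few of the activation functions listed; the weights and inputs are arbitrary illustrative values, as they would be at the start of training.

```python
import numpy as np

# Common activation functions named above (illustrative implementations).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through an activation function to decide its output.
def neuron_output(inputs, weights, bias, activation=sigmoid):
    return activation(np.dot(inputs, weights) + bias)

# Arbitrary starting weights, as at the beginning of the first epoch.
x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -0.3, 0.8])
print(neuron_output(x, w, bias=0.1))
```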
How does a Model "Learn"? Learning is central to human intelligence: it allows us to acquire skills and expertise in many fields and to adapt to changing environments.
Our reactions, or outputs, in different conditions are based entirely on previous experiences, or inputs.
Implementing this learning capability in machines, so that they can predict outputs, is the central goal of artificial intelligence. Based on topology, the connections of an ANN can be feedback or feed-forward 28, 29.
Feedback or Recurrent ANN Model: The connections contain cycles. A feedback model first reduces the error between the predicted output and the real output and only then gives the final output.
In such ANN models, each time an input is presented, the model may have to iterate for a potentially long time before it produces a response.
Feedback ANN models are usually more difficult to train than feed-forward ANN models. Here the network learns by backpropagation or the delta rule.
Feed-forward ANN Model or Acyclic Network: The connections between the nodes do not form cycles. A feed-forward network starts from randomly assigned weight values, applies the activation function, and produces the output 30 (a minimal sketch of such a pass follows Fig. 3).
FIG. 3: FEEDBACK AND FEED-FORWARD MODELS
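The sketch referred to above is given here: a minimal feed-forward (acyclic) pass through one hidden layer, assuming randomly assigned starting weights; the layer sizes and values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly assigned starting weights for a 4-input, 3-hidden-node, 1-output network.
W_hidden = rng.normal(size=(4, 3))
b_hidden = np.zeros(3)
W_output = rng.normal(size=(3, 1))
b_output = np.zeros(1)

def feedforward(x):
    hidden = sigmoid(x @ W_hidden + b_hidden)     # hidden layer transformation
    return sigmoid(hidden @ W_output + b_output)  # output layer prediction

print(feedforward(np.array([0.1, 0.5, 0.2, 0.9])))
```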
METHOD:
ANN Model Development: 31, 32 In this research work, a model was developed to predict drug release from SR monolithic tablets using the backpropagation supervised-learning method, which is not possible with other simple statistical methods 33. An ANN model can learn the latent relationship between the causal factors (formulation variables) and the responses (in-vitro release characteristics) 34. ANN model development includes a number of operations such as training, validation, and testing, carried out stepwise as follows:
FIG. 4: HIERARCHY OF ANN MODEL DEVELOPMENT
As per this hierarchy, the type of network is selected on the basis of the problem to be solved. The present study aims to develop an ANN model that a formulator can use to predict SR tablet performance. This eliminates tedious trial-and-error work and can even replace other statistical methods, which require a large number of runs when many dependent factors are involved.
Data Gathering: Formulators have to develop their own data set to build their own ANN model 35, and the model is trained on these data on the principle of generalization 36. The present study involved the retrieval and compilation of data from experiments and granted patents pertaining to pharmaceutical formulations; this pool of data was used to develop the ANN model. Compiling data from granted patents removes the need to perform a large number of experiments, since granted patents provide authenticated data per se; nevertheless, the data were validated randomly by developing formulations from the collected data set. Data were developed and collected using selection criteria such as sustained-release monolithic tablets, giving a total of 101 records. These formulation data contain characteristics of the drug and excipients, such as the molecular weight, log P, and solubility of the drug, as well as factors that can affect the dissolution profile of the formulation, such as the pH of the dissolution medium, USP apparatus number, RPM, drug-to-polymer ratio, and total tablet weight. These data serve as the input nodes of the network, and the network predicts the performance of the SR monolithic tablet as the time required to release 10%, 50%, and 80% of the drug, which form the output nodes 37, 38. Data should be selected such that the model neither overfits nor underfits. A hypothetical example of a single record is sketched below.
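The hypothetical record below shows how one compiled formulation could be represented; the field names and values are purely illustrative assumptions based on the factors listed above, not an excerpt from the actual data set.

```python
# One illustrative (hypothetical) record from the compiled data set.
record = {
    # drug and formulation characteristics (inputs)
    "mol_wt": 206.3,            # molecular weight of the drug
    "log_p": 3.5,
    "water_solubility": 0.021,  # mg/mL
    "drug_amount_mg": 100,
    "drug_polymer1_ratio": 1.0,
    "polymer1_viscosity": 4000,
    "tablet_weight_mg": 400,
    # dissolution test conditions (inputs)
    "medium_ph": 6.8,
    "usp_apparatus": 2,
    "rpm": 50,
    # responses (outputs): time (h) to release 10%, 50% and 80% of the drug
    "t10_h": 0.5,
    "t50_h": 5.5,
    "t80_h": 10.4,
}
```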
Data Splitting: Generally, a three-way split (training, validation, testing) is used. Here the software JUSTNN version 4.0b was used, which requires two data sets, training and testing; from the training data itself the software validates and tunes the hyper-parameters to fit the best model with the least error. The accuracy on the test set shows the ability to predict unknown data, a strategy widely adopted in machine learning 39, 40. Here, 79 records were used for training and the remainder were kept as the testing data set (a minimal sketch of such a split is shown below).
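A minimal sketch of the 79/22 split is shown below; it assumes the compiled records are already held in numeric arrays (random placeholder data are used here) and is not the procedure JUSTNN performs internally.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((101, 15))   # placeholder input features for the 101 records
y = rng.random((101, 3))    # placeholder outputs: t10, t50, t80

# Shuffle once, then keep 79 records for training and the rest for testing.
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:79], idx[79:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(X_train.shape, X_test.shape)   # (79, 15) (22, 15)
```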
IFS: In many formulation problems a large number of variables are available to train the network, but it is hard to define which of them are most relevant or useful 41. The situation becomes even more confusing when there is interdependence or correlation between variables. Because of its unique ability to spot patterns in data, an ANN can be used to rank which formulation and processing variables are most critical in influencing the output parameter of interest, so the network was designed using input feature selection. Input feature selection (IFS) is generally used to cope with a large number of irrelevant input features, which may confuse the network unnecessarily during learning. The objectives of IFS are manifold, the important ones being (1) to avoid overfitting and improve model performance, (2) to provide faster and more cost-effective models, and (3) to gain deeper insight into the underlying process of data generation.
The software itself contains an IFS feature, with which this task was performed. The software ranked the importance of the inputs as: Polymer 1 viscosity > Drug:Polymer 2 ratio > Drug amount > Log P > Polymer 3 viscosity > Mol. wt. of drug > USP apparatus > Tablet weight > RPM > Drug:Polymer 1 ratio > pH of medium > Polymer 2 viscosity > Water solubility of drug > pKa > Drug:Polymer 3 ratio. A "remove one at a time" strategy was then followed, evaluating each model on the basis of RMSE (a sketch of this strategy is given after the RMSE definition below).
The actual and predicted outcomes are compared by means of the root mean square error (RMSE). The larger the error, the greater the dissimilarity between the two results and the poorer and less accurate the prediction, so the target is to minimize the RMSE 42, 43. RMSE is calculated using the following equation:
RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\text{Predicted}_i - \text{Observed}_i)^2}
Where “PREDICTED” is the predicted value from the models, “OBSERVED” is the observed value from the experiments, and “N” is the total number of test cases.
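The sketch below illustrates the "remove one at a time" IFS strategy evaluated by RMSE. It is an assumption-laden stand-in for what JUSTNN does internally: it uses scikit-learn's MLPRegressor as the small network, and it assumes the input columns are already ordered from most to least important, so that the candidate dropped at each step is the last remaining column.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rmse(observed, predicted):
    """Root mean square error, as defined in the equation above."""
    diff = np.asarray(predicted, float) - np.asarray(observed, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def evaluate_subset(X_train, y_train, X_test, y_test, keep):
    """Train a small 3-hidden-node network on the kept columns; return test RMSE."""
    net = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
    net.fit(X_train[:, keep], y_train)
    return rmse(y_test, net.predict(X_test[:, keep]))

def remove_one_at_a_time(X_train, y_train, X_test, y_test, feature_names):
    """Drop the least important remaining feature while the RMSE keeps improving."""
    keep = list(range(X_train.shape[1]))
    best = evaluate_subset(X_train, y_train, X_test, y_test, keep)
    while len(keep) > 1:
        trial = keep[:-1]   # columns assumed ordered most -> least important
        score = evaluate_subset(X_train, y_train, X_test, y_test, trial)
        if score >= best:   # removal no longer helps, so stop (as with Network D)
            break
        keep, best = trial, score
    return [feature_names[i] for i in keep], best
```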
Training and Optimization of Learning Variables: The training data set contains a total of 79 records, which were trained with different learning variables. Choosing the right variables, such as the learning rate and momentum, helps the weight adjustment. Setting the right learning rate can be the biggest task: if it is too small, the algorithm may take a long time to converge; if it is too large, the algorithm may diverge. Likewise, large momentum values push the adjustment of the current weights to keep moving in the same direction 44.
The ANN network contained one hidden layer with 3 hidden nodes. The network finalized after IFS was trained with further learning variables: the learning rate and momentum were studied at 3 different levels each, with the target error set below 0.01 within 10% of the range of the given validation data. In total, 3 models were trained for 3 different value combinations (a generic sketch of such training with momentum is given below).
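The fragment below is a generic sketch of backpropagation training with a learning rate and momentum, using placeholder data shaped like the training set (79 records, 13 inputs, 3 outputs) and the Network C1 settings from Table 4; it is not a reproduction of the JUSTNN training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.random((79, 13))          # placeholder scaled inputs
Y = rng.random((79, 3))           # placeholder scaled outputs (t10, t50, t80)

# One hidden layer with 3 nodes, as in the finalised network.
W1 = rng.normal(scale=0.5, size=(13, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 3));  b2 = np.zeros(3)

lr, momentum = 0.01, 0.05         # Network C1 learning variables (Table 4)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(5000):         # far fewer epochs than the actual run
    # forward pass
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    # backward pass (squared-error loss, sigmoid derivatives)
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    # momentum update: new step = momentum * previous step + lr * gradient
    vW2 = momentum * vW2 + lr * (H.T @ dP) / len(X)
    vb2 = momentum * vb2 + lr * dP.mean(axis=0)
    vW1 = momentum * vW1 + lr * (X.T @ dH) / len(X)
    vb1 = momentum * vb1 + lr * dH.mean(axis=0)
    W2 -= vW2; b2 -= vb2; W1 -= vW1; b1 -= vb1

print("average absolute error after training:", np.mean(np.abs(P - Y)))
```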
Evaluation Criteria: In machine learning, the correlation coefficient and coefficient of determination are usually adopted as evaluation metrics for regression problems. The correlation coefficient indicates a linear relationship between two variables and gives the correlation between predicted and observed values. However, this alone is not sufficient for predicting pharmaceutical product performance; in pharmaceutics, a good dissolution profile prediction model should have less than 10% error 45. Following USFDA guidance, the credibility, or accuracy, of the final model can be evaluated on the basis of the similarity (f2) and dissimilarity (f1) factors 46, 47, 48.
The f1 factor (eq.1) calculates the percent difference between the two dissolution profiles at each time point and is a measurement of relative error between the two profiles:
f_1 = \left\{ \left[ \sum_{t=1}^{n} |R_t - T_t| \right] \Big/ \left[ \sum_{t=1}^{n} R_t \right] \right\} \times 100 \quad \text{...(1)}
where n is the number of time points, Rt is the mean dissolution value for the reference product at time t and Tt is the mean dissolution value for the test product at that same time point. The f1 value is equal to zero when the test and reference profiles are identical and increases as the two profiles become less similar.
The f2 factor (eq. 2) is a logarithmic reciprocal square-root transformation of the sum of squared error and is a measurement of the similarity in the percent dissolution between the two profiles. The f2 value is equal to 100 when the test and reference profiles are identical and decreases exponentially as the two profiles become less similar.
f_2 = 50 \cdot \log_{10} \left[ 100 \Big/ \sqrt{1 + \frac{1}{n} \sum_{t=1}^{n} (R_t - T_t)^2} \right] \quad \text{...(2)}
where Rt and Tt are the percentages of drug dissolved at each time point for the reference and test products, respectively.
According to the guidelines issued by regulatory authorities, f1 values up to 15 (0-15) and f2 values greater than 50 (50-100) ensure the "sameness" or "equivalence" of the two profiles. Values less than 50 may be acceptable if justified.
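A minimal sketch of both factors is given below; the two dissolution profiles are made-up illustrative numbers, not data from this study.

```python
import numpy as np

def f1_difference(reference, test):
    """Dissimilarity factor f1 (eq. 1): percent relative error between profiles."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    return np.sum(np.abs(R - T)) / np.sum(R) * 100

def f2_similarity(reference, test):
    """Similarity factor f2 (eq. 2): log reciprocal square-root transform."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    mse = np.mean((R - T) ** 2)
    return 50 * np.log10(100 / np.sqrt(1 + mse))

# Illustrative profiles (% released at successive time points).
reference = [10, 25, 45, 70, 90]
test      = [12, 27, 48, 72, 88]
print(f1_difference(reference, test))   # ~4.6  -> within 0-15
print(f2_similarity(reference, test))   # ~80.6 -> within 50-100
```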
Testing: The finalized model was tested for accuracy using the remaining testing data set. The model predicts values that are compared in the form of f1 and f2, which must satisfy the guidelines of the regulatory authorities 49, 50.
RESULTS AND DISCUSSION: Deep learning requires a large data set, from which it structures algorithms in layers to create an "artificial neural network" that can learn and make intelligent decisions on its own.
TABLE 3: COMPARISON OF NETWORKS FOR IFS ON THE BASIS OF RMSE
Network | Observed value | Predicted value | RMSE |
Network A (without removal of any input) | 1.36 | 2.2806 | 1.338144 |
 | 5.18 | 5.6155 | |
 | 7.49 | 9.572 | |
Network B (removal of D:P3 ratio) | 1.36 | 1.6672 | 0.630322 |
 | 5.18 | 4.6353 | |
 | 7.49 | 8.3849 | |
Network C (removal of D:P3 ratio + pKa) | 1.36 | 1.6995 | 0.604847 |
 | 5.18 | 4.6116 | |
 | 7.49 | 8.3019 | |
Network D (removal of D:P3 ratio + pKa + solubility of drug) | 1.36 | 0.7903 | 1.338549 |
 | 5.18 | 5.4318 | |
 | 7.49 | 9.7232 |
Here, a total of 101 records were used to train and test the model, and the inputs were optimized using the IFS method. From Table 3 it can be concluded that RMSE decreases up to the removal of 2 features, but further removal increases the RMSE, indicating that the previous network is the best fit. The finalized model selected for training therefore has 13 input nodes and 3 output nodes.
TABLE 4: VARIABLE OPTIMIZATION OF FINAL NETWORK C
Variables | Network C1 | Network C2 | Network C3 |
Learning rate | 0.01 | 2.5 | 5 |
Momentum | 0.05 | 0.6 | 0.9 |
Epochs | 480801 | 9038600 | 114201 |
Target error | 0.01 | 0.01 | 0.01 |
Average error | 0.009504 | 0.013350 | 0.2413 |
RMSE | 0.5062 | 1.3367 | 2.6335 |
After the IFS step, the final model C was further studied for different learning variables for optimization, as shown in Table 4.
Of these 3 models, Network C1 was selected on the basis of RMSE, as it has the lowest RMSE of all, indicating the best-fit model, which then went forward for testing and evaluation.
This similarity/dissimilarity strategy was adopted because it is useful for assessing the predictive ability of the trained network and for verifying whether the network can generalize to unseen data within the data set. Table 5 below compares the predicted and observed values and evaluates them in the form of f1 and f2 using the remaining 22 test records.
TABLE 5: TESTING OF ANN ON THE BASIS OF f1 AND f2
S. no. | Observed Value | Predicted Value | f1 | f2 |
1 | 2.9106 | 2.1558 | 2.888 | 96.85 |
 | 11.672 | 12.6761 | | |
 | 20.3986 | 20.4187 | | |
2 | 5.821 | 5.7879 | 2.36 | 98.58 |
 | 9.875 | 10.5044 | | |
 | 15.987 | 16.1385 | | |
3 | 1.8543 | 1.8736 | 6.768 | 98.24 |
 | 6.7413 | 7.4654 | | |
 | 10.832 | 10.7662 | | |
4 | 1.5221 | 1.88 | 6.447 | 97.56 |
 | 6.6992 | 7.488 | | |
 | 10.7212 | 10.7957 | | |
5 | 0.5341 | 0.4394 | 0.149 | 99.96 |
 | 5.4998 | 5.5201 | | |
 | 10.412 | 10.4619 | | |
6 | 0.4672 | 0.1645 | 2.854 | 98.68 |
 | 5.9843 | 5.4987 | | |
 | 12.5647 | 12.8103 | | |
7 | 0.7549 | 0.2833 | 1.463 | 99.11 |
 | 6.2876 | 6.3092 | | |
 | 10.98 | 11.1662 | | |
8 | 0.2656 | 0.1519 | 3.349 | 97.65 |
 | 8.6951 | 8.7006 | | |
 | 12.9875 | 13.8308 | | |
9 | 2.3111 | 2.4388 | 1.662 | 99.51 |
 | 6.6782 | 6.5611 | | |
 | 10.3168 | 9.9853 | | |
10 | 0.7065 | 0.6377 | 1.374 | 99.89 |
 | 5.778 | 5.778 | | |
 | 10.256 | 10.0948 | | |
11 | 0.4311 | 0.3229 | 5.645 | 98.03 |
 | 5.9753 | 6.3312 | | |
 | 9.9842 | 10.6617 | | |
12 | 0.3468 | 0.287 | 1.728 | 97.34 |
 | 6.11 | 5.3704 | | |
 | 9.003 | 9.5353 | | |
13 | 1.0178 | 0.1864 | 4.754 | 97.45 |
 | 8.6431 | 8.3403 | | |
 | 16.4569 | 16.3494 | | |
14 | 0.2543 | 0.1992 | 0.072 | 100.97 |
 | 8.0543 | 8.7134 | | |
 | 12.4511 | 13.8401 | | |
15 | 1.0322 | 0.153 | 3.333 | 97.5 |
 | 8.6733 | 8.7134 | | |
 | 13.7842 | 13.8401 | | |
16 | 0.6742 | 0.1869 | 1.742 | 99.15 |
 | 8.9765 | 9.0108 | | |
 | 11.5235 | 11.6078 | | |
17 | 3.8648 | 3.8806 | 0.743 | 99.93 |
 | 8.6835 | 8.7424 | | |
 | 14.5883 | 14.7149 | | |
18 | 7.1298 | 7.2526 | 1.429 | 99.2 |
 | 14.8975 | 15.3526 | | |
 | 23.7854 | 23.8623 | | |
19 | 0.9923 | 1.8356 | 2.4774 | 97.38 |
 | 6.2113 | 5.9159 | | |
 | 9.3145 | 9.1753 | | |
20 | 0.6322 | 0.4742 | 1.75 | 98.69 |
 | 4.993 | 5.5753 | | |
 | 10.1734 | 10.0255 | | |
21 | 6.7611 | 7.2985 | 4.826 | 94.9 |
 | 14.953 | 15.4364 | | |
 | 22.8639 | 23.9944 | | |
22 | 0.6419 | 0.255 | 11.56 | 95.97 |
 | 5.7124 | 4.6392 | | |
 | 8.1648 | 7.9467 | |
Satisfying the regulatory norms, the values for the test data fall within the required ranges, i.e. f1 between 0 and 15 and f2 between 50 and 100, with one exception, ensuring the "sameness" or "equivalence" of the two profiles, with an average error of ~0.0095. A regression plot was constructed for the predicted and observed values of drug release at the various sampling points using the test data set to obtain the squared correlation coefficient (R2) and the slope.
FIG. 5: REGRESSION PLOT OF ANN MODEL FOR PREDICTED AND OBSERVED VALUES
An ANN model is considered optimal when it yields a regression plot with both the slope and R2 closest to 1.0. On the basis of these criteria, i.e. f1 and f2 and the squared correlation coefficient (R2), the developed model can be said to satisfy the norms of accuracy and the regulatory guidelines. A minimal sketch of how the slope and R2 can be computed from predicted and observed values follows.
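The sketch below uses made-up predicted and observed values, not the actual test-set data, to show how the slope and R2 of such a regression plot can be computed.

```python
import numpy as np

# Hypothetical pooled observed vs. predicted release values from a test set.
observed  = np.array([1.36, 5.18, 7.49, 0.53, 5.50, 10.41])
predicted = np.array([1.70, 4.61, 8.30, 0.44, 5.52, 10.46])

# Least-squares regression line of predicted on observed, and squared correlation.
slope, intercept = np.polyfit(observed, predicted, 1)
r2 = np.corrcoef(observed, predicted)[0, 1] ** 2
print(f"slope = {slope:.3f}, R^2 = {r2:.4f}")   # both should be close to 1.0
```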
CONCLUSION: In this work, a generalized artificial neural network was successfully developed for predicting drug release from SR monolithic tablets. The networks were rigorously trained, optimized for various variables, and tested on enough data to exhibit reliable prediction behaviour, with a best achieved average error of ~0.0095 and R2 of 0.9953. The model also satisfactorily fulfils the USFDA guidelines for the comparison of two dissolution profiles, adding to its acceptability. Lengthy and tedious work such as pharmaceutical formulation development can be simplified using statistical methods, and lightened further using today's smarter approaches such as ANN modelling. A model developed once can subsequently be used to predict product performance, eliminating the need for tedious physical experiments. ANN combines long-standing formulation experience with present-day machine intelligence and deserves to be explored much more in the pharmaceutical world.
ACKNOWLEDGEMENT: Authors are grateful to Shree S. K. Patel College of pharmaceutical education & research, Ganpat University, whose cooperation made this study possible.
We also acknowledge Laksh Finechem Pvt. Ltd., Alembic Pharmaceuticals Ltd, Granules India Limited, and Gattefosse India Pvt. Ltd for providing gratis samples of drugs for the study.
CONFLICTS OF INTEREST: None
REFERENCES:
- Briganti G and Le Moine O: Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020; 7: 27.
- Chen MY, Fan MH, Chen YL and Wei HM: Design of experiments on neural network's parameters optimization for time series forecasting in stock markets. Neural Network World 2013; 23: 369.
- Jacques B, Heinz S, Peter van H and Hans L: Basic Concepts of Artificial Neural Networks (ANN) Modeling in the Application to Pharmaceutical Development. Pharmaceutical Development and Technology 1997; 2(2): 95-109.
- Srikantha V: Modeling and optimization of developed cocoa beans extractor parameters using box behnken design and artificial neural network. Computer and electronics in Agriculture 2020; 117.
- Jena PR, Majhi R, Kalli R, Managi S and Majhi B: Impact of COVID-19 on GDP of major economies: Application of the artificial neural network forecaster. Economic Analysis and Policy 2021; 69: 324-39.
- Toraman S, Alakus T and Turkoglu I: Convolutional capsnet: A novel artificial neural network approach to detect COVID-19 disease from X-ray images using capsule networks. Chaos, Solitons and Fractals 2020; 140: 110-22.
- Tamang SK: Forecasting of Covid-19 cases based on prediction using artificial neural network curve fitting technique Global J. Environ. Sci. Manage 2020 6(4).
- Li EY: Artificial neural networks and their business applications. Information & Management 1994; 27: 303-13.
- Barmpalexis P: Artificial neural networks in the optimization of a nimodipine controlled release tablet formulation. European Journal of Pharmaceutics and Biopharmaceutics 2010; 74: 316-23.
- Vijaykumar S, Anastasia G, Prabodh S and Deepak B: Artificial Neural Network in Drug Delivery and Pharmaceutical Research. The Open Bioinformatics Journal 2013; 7(Suppl-1): 49-62.
- Ebube NK, Owusu-Ababio G and Adeyeye CM: Preformulation studies and characterization of the physicochemical properties of amorphous polymers using artificial neural networks. Int J Pharm 2000; 196: 27-35.
- Basheer IA and Hajmeer M: Artificial neural networks: fundamentals, computing, design, and application. Journal of microbiological methods 2000; 43: 3-1.
- Haykin S: Neural Networks - A Comprehensive Foundation, Macmillan, 1994.
- Krenker A, Bester J and Kos A: Introduction to the artificial neural networks. Artificial Neural Networks: Methodological Advances and Biomedical Applications. InTech 2011; 1: 1-8.
- Yu HH: Introduction to ANN & Fuzzy Systems. University of Wisconsin – Madison Dept. Electrical & Computer Engineering 2001.
- Wang: Application of Artificial neural Network model in diagnosis of Alzheimer’s disease. BMC Neurology 2019; 19: 154
- Nassif AB, Shahin I, Attili I, Azzeh M and Shaalan K: Speech recognition using deep neural networks: A systematic review. IEEE Access 2019; 7: 19143-165.
- Mossalam A and Arafa M: Using artificial neural networks (ANN) in projects monitoring dashboards’ formulation, HBRC Journal 2018, 14(3): 385-92.
- Mandlik V, Bejugam PR and Singh S: Application of artificial neural networks in modern drug discovery. In: Artificial Neural Network for Drug Design, Delivery and Disposition. Academic Press 2016; 123-39.
- Kustrin S and Beresford R: Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J Pharm Biomed Anal 2000; 22(5): 717-27.
- Peh KK, Lim CP, Quek SS and Khoh KH: Use of artificial neural networks to predict drug dissolution profiles and evaluation of network performance using similarity factor. Pharm Res 2000; 17(11): 1384-88.
- Takayama K, Takahara J, Fujikawa M, Ichikawa H and Nagai T: Formula optimization based on artificial neural networks in transdermal drug delivery. J Control Release 1999; 62(1-2): 161-70.
- Larsen J: Introduction to Artificial Neural Networks. Section for Digital Signal Processing Department of Mathematical Modelling Technical Uni of Denmark 1999.
- Michael ML, Iain C and Owen IC: The use of artificial neural networks for the selection of the most appropriate formulation and processing variables in order to predict the in-vitro dissolution of sustained release mini tablets. AAPS PharmSciTech 2003; 4(2): 1-12.
- Zupan J: Introduction to artificial neural network (ANN) methods: what they are and how to use them. Acta Chimica Slovenica 1994; 41: 327.
- Bourquin J, Schmidli H, van Hoogevest P and Leuenberger H: Basic concepts of artificial neural networks (ANN) modeling in the application to pharmaceutical development. Pharm Dev Technol 1997; 2(2): 95-109.
- Kishan M, Chilukuri K and Sanjay R: Elements of Artificial Neural Networks. In: A Bradford book. MIT press; October 1996.
- Wythoff BJ: Back propagation neural networks: a tutorial, Chemometr Intell Lab Syst 1993; 18: 115-55.
- Simon H: Neural Network: A Comprehensive Foundation 1998. Second edition, Reprint 2009:183-95.
- Kohonen T: Self-organization and Associative Memory, Springer Verlag, Berlin 1988.
- Jadid MN and Fairbairn DR: Predicting moment-curvature parameters from experimental data. Eng Appl Artif Intell 1996; 9: 303-19.
- Carpenter JC and Hoffman ME: Understanding neural network approximations and polynomial approximations helps neural network performance. AI Expert 1995; 10: 31-33.
- Ibrić S, Djuriš J, Parojčić J and Djurić Z: Artificial neural networks in evaluation and optimization of modified release solid dosage forms. Pharmaceutic 2012; 4: 531-50.
- Dowell JA, Hussain A, Devane J and Young D: Artificial neural networks applied to the in-vitro-in-vivo correlation of an extended-release formulation: initial trials and experience. J Phar Sci 1999; 88: 154-60.
- Baptista D, Abreu S, Freitas F, Vasconcelos R and Morgado-Dias F: A survey of software and hardware use in artificial neural networks. Neural Computing and Applications 2013; 23: 591-99.
- Svozil D: Introduction to multi-layer feed-forward neural networks. Chemometrics and Intelligent Laboratory Systems 1997; 39: 43-62.
- Rayzard SM: Learning strategies and automated review acquisition: An Review. Report #926, Department of Computer Science, University of Illinois 1984.
- Bourquin J, Schmidli H, van Hoogevest P and Leuenberger H: Basic concepts of artificial neural networks (ANN) modeling in the application to pharmaceutical development. Pharm Dev Technol 1997; 2: 95-109.
- De Mulder W, Bethard S and Moens MF: A survey on the application of recurrent neural networks to statistical lang modeling. Computer Speech & Lan 2015; 30: 61-98.
- Hassan KM, Pezhman K and Lucia P: Computational intelligence models to predict porosity of tablets using minimum features. Drug Design, Development and Therapy 2017; 11: 193-202.
- Goh WY: Application of RNNs to Prediction of Drug Dissolution Profiles. Neural Computing & Applications 2002; 10: 311-17.
- Eftekhar B: Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data. BMC Medical Informatics and Decision Making 2005; 5(3): 1-8.
- Gölcü M, Sekmen Y, Erduranlı P and Salman MS: Artificial neural-network based modeling of variable valve-timing in a spark-ignition engine. Applied Energy 2005; 81: 187-97.
- Chatterjee SP and Pandya AS: Artificial Neural Networks in Drug Transport Modeling and Simulatione II. Artificial Neural Network for Drug Design, Delivery and Disposition 2015: 243.
- Shah VP, Tsong Y, Sathe P and Liu JP: In-vitro dissolution profile comparison-statistics and analysis of the similarity factor, f2. Pharm Res 1998; 15: 889-96.
- Karuppiah SP: Analytical method development for dissolution release of finished solid oral dosage forms. Int J Curr Pharm Res 2012; 4: 48-53.
- Lourenço FR, Ghisleni DD, Yamamoto RN and Pinto TD: Comparison of dissolution profile of extended-release oral dosage forms-two one-sided equivalence test. Brazilian journal of pharmaceutical sciences 2013; 49: 367-71.
- Yilong Y: Deep learning for in-vitro prediction of pharmaceutical formulations”. Acta Pharmaceutica Sinica B 2019; 9(1): 177-85.
- Marijana M: Optimization and Prediction of Ibuprofen Release from 3D DLP Printlets Using Artificial Neural Networks. Pharmaceutics 2019; 11: 544.
- Zakria Q: Predicting the energy output of hybrid PV–wind renewable energy system using feature selection technique for smart grids. Energy Reports 2021 (Article in Press).
How to cite this article:
Vyas TH and Patel GN: Generalized artificial neural network modelling and its application in performance prediction of sustain release monolithic tablets. Int J Pharm Sci & Res 2021; 12(12): 6530-39. doi: 10.13040/IJPSR.0975-8232.12(12).6530-39.
All © 2021 are reserved by the International Journal of Pharmaceutical Sciences and Research. This Journal licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.