COMMENTS

  1. Teaching Networks to Solve Optimization Problems

    often in the form of a recurrent neural network that receives the current loss (or its gradient) as input and outputs the parameter updates [15, 22, 3, 33, 8]. Such methods are effective across a wide range of optimization problems, reducing the number of iterations needed and often reaching better solutions on non-convex problems.
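
    To make the mechanism concrete, here is a minimal sketch of such a learned optimizer, assuming a PyTorch-style API; the class name, hidden size, and toy objective are illustrative, not taken from the cited papers.

      # Minimal sketch of a learned optimizer: an LSTM maps each parameter's
      # gradient coordinate to an update coordinate. In practice the LSTM's
      # own weights are meta-trained across many objectives.
      import torch
      import torch.nn as nn

      class LearnedOptimizer(nn.Module):
          def __init__(self, hidden_size=20):
              super().__init__()
              self.rnn = nn.LSTMCell(1, hidden_size)  # input: one gradient coordinate
              self.out = nn.Linear(hidden_size, 1)    # output: one update coordinate

          def step(self, grad, state):
              # grad: (n_params, 1) column of gradient coordinates
              h, c = self.rnn(grad, state)
              return self.out(h), (h, c)

      # One optimization step on a toy quadratic f(x) = ||x||^2
      opt = LearnedOptimizer()
      x = torch.randn(5, 1, requires_grad=True)
      state = (torch.zeros(5, 20), torch.zeros(5, 20))
      loss = (x ** 2).sum()
      grad, = torch.autograd.grad(loss, x)
      update, state = opt.step(grad, state)
      x = x + update   # a trained LearnedOptimizer makes these updates shrink f quickly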

  2. Artificial neural networks used in optimization problems

    This work proposes using artificial neural networks to approximate the objective function of an optimization problem, making it possible to apply other techniques to solve it. The objective function is approximated by a non-linear regression, which is then used in place of the original objective when solving the optimization problem. ... The proposal to solve an ...
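
    As a rough illustration of this surrogate idea (the paper's own model and solver are not shown in the excerpt), one can fit a neural regressor to samples of an expensive objective and then optimize the cheap surrogate; the toy function and hyperparameters below are assumptions.

      # Fit a neural surrogate to samples of a costly objective, then
      # optimize the surrogate with a standard method.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from scipy.optimize import minimize

      def expensive_objective(x):            # stand-in for a costly black-box function
          return np.sin(3 * x) + 0.5 * x ** 2

      X = np.random.uniform(-2, 2, size=(200, 1))
      y = expensive_objective(X).ravel()

      surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

      # Minimize the surrogate (not the true objective).
      res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0], x0=[0.0])
      print("approximate minimizer:", res.x)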

  3. Artificial Neural Networks Based Optimization Techniques: A Review

    In the last few years, intensive research has been done to enhance artificial intelligence (AI) using optimization techniques. In this paper, we present an extensive review of artificial neural network (ANN)-based optimization techniques, together with some of the well-known optimization algorithms, e.g., genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), and ...

  4. Solving real-world optimization tasks using physics-informed neural networks

    [Figure 1 caption] Neural network architecture using physics-informed loss to solve the optimization task. (a) The domain variables (e.g., time or position) as neural network inputs. (b) The target ...

  5. Solving Combinatorial Optimization Problems with Deep Neural Network: A Survey

    Combinatorial Optimization Problems (COPs) are a class of optimization problems that are commonly encountered in industrial production and everyday life. Over the last few decades, traditional algorithms, such as exact algorithms, approximate algorithms, and heuristic algorithms, have been proposed to solve COPs. However, as COPs in the real world become more complex, traditional algorithms ...

  6. Optimization for Deep Learning: An Overview

    Optimization is a critical component in deep learning. We think optimization for neural networks is an interesting topic for theoretical research due to various reasons. First, its tractability despite non-convexity is an intriguing question and may greatly expand our understanding of tractable problems. Second, classical optimization theory is far from enough to explain many phenomena ...

  7. Neural Network Optimization

    Saddle point: a point that is a local minimum along some directions and a local maximum along others. An example function that is often used for testing the performance of optimization algorithms near such points is the Rosenbrock function. The function is described by the formula f(x,y) = (a-x)² + b(y-x²)², which has a global minimum at (x,y) = (a,a²). This is a non-convex function with a global minimum located within a ...
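
    A quick numeric check of the formula, using the common parameter choice a=1, b=100 (an assumption; the snippet leaves a and b unspecified):

      def rosenbrock(x, y, a=1.0, b=100.0):
          # f(x, y) = (a - x)^2 + b*(y - x^2)^2, global minimum at (a, a^2)
          return (a - x) ** 2 + b * (y - x ** 2) ** 2

      print(rosenbrock(1.0, 1.0))   # 0.0 at the minimum (1, 1)
      print(rosenbrock(0.0, 0.0))   # 1.0 away from it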

  8. Combinatorial optimization with physics-inspired graph neural networks

    Here we demonstrate how graph neural networks can be used to solve combinatorial optimization problems. Our approach is broadly applicable to canonical NP-hard problems in the form of quadratic ...
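
    A minimal sketch of the physics-inspired recipe, assuming the problem is MaxCut encoded as a QUBO (the graph, the plain logits standing in for a graph neural network, and all hyperparameters are illustrative): relax the binary variables to probabilities, minimize the differentiable QUBO Hamiltonian, then round.

      import torch

      # Example: MaxCut on a small graph, encoded as a QUBO matrix Q such
      # that H(x) = x^T Q x is minimized by a maximum cut.
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
      n = 4
      Q = torch.zeros(n, n)
      for i, j in edges:
          Q[i, i] -= 1
          Q[j, j] -= 1
          Q[i, j] += 1
          Q[j, i] += 1

      theta = torch.randn(n, requires_grad=True)   # logits (stand-in for a GNN output)
      opt = torch.optim.Adam([theta], lr=0.1)
      for _ in range(500):
          p = torch.sigmoid(theta)    # relaxed node assignments in [0, 1]
          loss = p @ Q @ p            # differentiable (relaxed) QUBO Hamiltonian
          opt.zero_grad()
          loss.backward()
          opt.step()

      x = (torch.sigmoid(theta) > 0.5).int()   # round to a binary cut assignment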

  9. Deep Optimisation: Solving Combinatorial Optimisation Problems using Deep Neural Networks

    Deep Optimisation (DO) combines evolutionary search with Deep Neural Networks (DNNs) in a novel way: not for optimising a learning algorithm, but for finding a solution to an optimisation problem. Deep learning has been successfully applied to classification, regression, decision and generative tasks, and in this paper we extend its application to solving optimisation problems. Model Building ...

  10. Deep Optimisation: Solving Combinatorial Optimisation Problems using Deep Neural Networks

    DO is the first algorithm to use a deep multi-layered feed-forward neural network to solve CO problems within the framework of Model-Building Optimisation Algorithms (MBOAs). The focus of this paper is to introduce the concept of DO to show how DNNs can be extended to the field of MBOAs. By making this connection we open the opportunity to use the advanced DL tools that have a well ...

  11. Optimization of Neural Networks with Linear Solvers

    Now, that's it! We have defined our optimization problem. We can now solve it using a solver. In this case, we use the open-source solver GLPK (a linear solver), since our problem is fully linear:

      # Solve the model (assumes `import pyomo.environ as pyo` and a model
      # named `objmodel` built earlier in the article)
      solver = pyo.SolverFactory('glpk')
      results = solver.solve(objmodel)
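
    Since the article's `objmodel` is not shown in this excerpt, here is a self-contained sketch of the same pattern with a hypothetical two-variable linear program:

      import pyomo.environ as pyo

      model = pyo.ConcreteModel()
      model.x = pyo.Var(domain=pyo.NonNegativeReals)
      model.y = pyo.Var(domain=pyo.NonNegativeReals)
      model.obj = pyo.Objective(expr=3 * model.x + 2 * model.y, sense=pyo.maximize)
      model.c1 = pyo.Constraint(expr=model.x + model.y <= 4)
      model.c2 = pyo.Constraint(expr=model.x <= 2)

      solver = pyo.SolverFactory('glpk')   # requires the GLPK binary on PATH
      solver.solve(model)
      print(pyo.value(model.x), pyo.value(model.y), pyo.value(model.obj))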

  12. Three ways to solve partial differential equations with neural networks

    Neural networks are increasingly used to construct numerical solution methods for partial differential equations. In this expository review, we introduce and contrast three important recent approaches attractive in their simplicity and their suitability for high-dimensional problems: physics-informed neural networks, methods based on the Feynman-Kac formula and methods based on the solution ...
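
    As a minimal illustration of the first approach (physics-informed neural networks), the sketch below fits a network to a toy ODE u'(t) = -u(t), u(0) = 1 by penalizing the equation residual at random collocation points; the architecture and hyperparameters are assumptions, and the review's PDE settings are more involved.

      import torch

      net = torch.nn.Sequential(
          torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      for _ in range(2000):
          t = torch.rand(64, 1, requires_grad=True)   # collocation points in [0, 1]
          u = net(t)
          du, = torch.autograd.grad(u.sum(), t, create_graph=True)
          residual = du + u                           # enforce u' = -u
          u0 = net(torch.zeros(1, 1))
          loss = (residual ** 2).mean() + ((u0 - 1.0) ** 2).mean()  # physics + IC
          opt.zero_grad()
          loss.backward()
          opt.step()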

  13. Solving classic unsupervised learning problems with deep neural networks

    We trained a deep neural network to solve the SFA optimization problem for videos of rotating objects. This can be done simply by training the network using (1) as the loss function, but the constraints need to be enforced via the network architecture. ... Mutual information neural estimators (MINEs) introduce a loss function for a neural network M to estimate a tight lower bound of ...
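
    A rough sketch of the SFA objective the snippet refers to (their equation (1) is not shown in the excerpt; the batch standardization below is a simple stand-in for the architectural constraints):

      import torch

      def sfa_loss(features):
          # features: (T, d) network outputs for T consecutive video frames
          z = (features - features.mean(0)) / (features.std(0) + 1e-6)
          return (z[1:] - z[:-1]).pow(2).mean()   # slowness: small frame-to-frame change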

  14. Using Neural Networks For Solving Optimization Problems

    This is already a problem when benchmarking, but gets worse if you want to train things like a neural net. There is currently a competition ongoing (ML4CO, see link below) that encourages participants to solve optimization problems using ML/AI, the results of which might give some insight into how ML can be used in MIP solvers:

  15. Solving Nonlinear Equality Constrained Multiobjective Optimization

    This paper develops a neural network architecture and a new processing method for solving, in real time, the nonlinear equality constrained multiobjective optimization problem (NECMOP), where several nonlinear objective functions must be optimized in a conflicting situation. In this processing method, the NECMOP is converted to an equivalent scalar optimization problem (SOP). The SOP is then ...

  16. Solving nonlinear optimization problems using networks of spiking neurons

    Most artificial neural networks used in practical applications are based on simple neuron types in a multi-layer architecture. Here, we propose to solve optimization problems using a fully recurrent network of spiking neurons mimicking the response behavior of biological neurons. Such networks can compute a series of different solutions for a given problem and converge into a periodical ...

  17. Solving degenerate optimization problems using networks of neural oscillators

    Optimization problems containing multiple, identical solutions, in particular the travelling salesman problem (TSP), are solved using networks of neural oscillators. Processing units are described by two state variables representing fast and slow membrane events, much like the FitzHugh-Nagumo model neurons, and are passive oscillators in that ...
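
    For reference, the FitzHugh-Nagumo dynamics the snippet alludes to pair a fast voltage-like variable with a slow recovery variable; the sketch below integrates them with textbook parameter values (an assumption, not values from the paper):

      import numpy as np

      def fitzhugh_nagumo(v, w, I=0.5, a=0.7, b=0.8, eps=0.08):
          dv = v - v ** 3 / 3 - w + I   # fast membrane variable
          dw = eps * (v + a - b * w)    # slow recovery variable
          return dv, dw

      v, w, dt = -1.0, 1.0, 0.05
      trajectory = []
      for _ in range(2000):             # forward-Euler integration
          dv, dw = fitzhugh_nagumo(v, w)
          v, w = v + dt * dv, w + dt * dw
          trajectory.append(v)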

  18. Learning to solve graph metric dimension problem based on ...

    Deep learning has been widely used to solve graph and combinatorial optimization problems. However, proper model deployment is critical for training a model and solving all problems. Existing frameworks mainly use reinforcement learning to learn to solve combinatorial optimization problems, in which a partial solution of the problem is regarded as an environmental state and each vertex of the ...
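
    The constructive framing described here (state = partial solution, action = add a vertex) can be sketched as a simple loop; the policy below is a random stub standing in for the learned graph neural network:

      import random

      def construct_solution(vertices, policy, is_complete):
          state = []                     # partial solution = environment state
          while not is_complete(state):
              candidates = [v for v in vertices if v not in state]
              state.append(policy(state, candidates))   # action: pick one vertex
          return state

      solution = construct_solution(
          vertices=range(10),
          policy=lambda s, c: random.choice(c),   # stub for the learned policy
          is_complete=lambda s: len(s) >= 3)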

  19. Solving convex optimization problems using recurrent neural networks in finite time

    A recurrent neural network is proposed to deal with the convex optimization problem. By employing a specific nonlinear unit, the proposed neural network is proven to converge to the optimal solution in finite time, which dramatically increases computational efficiency. Compared with most existing stability conditions, i.e., asymptotic stability and exponential stability, the ...

  20. Solving Convex Multi-Objective Optimization Problems via a Capable Neural Network

    By using the weighted sum method, a single-objective optimization problem related to the CMPP is formulated, in which the scalar objective is a weighted sum of the original objective functions. A Pareto Optimal Solution (POS) is obtained for each choice of weights. A neural network framework is then designed for solving the obtained ...
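
    A small sketch of the weighted-sum scalarization itself, with two toy convex objectives; scipy.optimize stands in for the paper's neural-network solver:

      import numpy as np
      from scipy.optimize import minimize

      f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2        # first objective
      f2 = lambda x: x[0] ** 2 + (x[1] - 2) ** 2        # second (conflicting) objective

      pareto_points = []
      for w in np.linspace(0.05, 0.95, 10):             # one POS per weight choice
          res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0])
          pareto_points.append((f1(res.x), f2(res.x)))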

  21. On solving constrained optimization problems with neural networks: a penalty method approach

    This paper deals with the use of neural networks to solve linear and nonlinear programming problems. The dynamics of these networks are analyzed. In particular, the dynamics of the canonical nonlinear programming circuit are analyzed. The circuit is shown to be a gradient system that seeks to minimize an unconstrained energy function that can be viewed as a penalty method approximation of the original ...
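
    The penalty construction mentioned here is easy to illustrate: replace min f(x) subject to g(x) = 0 with the unconstrained energy E(x) = f(x) + mu*g(x)^2 and follow its gradient, as the circuit's dynamics do. The toy problem, penalty weight, and step size below are assumptions:

      import numpy as np

      f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2   # objective
      g = lambda x: x[0] + x[1] - 1                     # equality constraint g(x) = 0
      mu = 50.0                                         # penalty weight

      def grad_energy(x):
          # gradient of E(x) = f(x) + mu * g(x)^2, computed analytically
          gf = np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])
          gg = np.array([1.0, 1.0])
          return gf + 2 * mu * g(x) * gg

      x = np.zeros(2)
      for _ in range(5000):        # discretized gradient flow
          x -= 1e-3 * grad_energy(x)
      # x approaches (0.5, 0.5), the constrained minimizer, as mu grows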

  22. Adversarial neural network methods for topology optimization of

    This research presents a novel method using an adversarial neural network to solve eigenvalue topology optimization problems. The study focuses on optimizing the first eigenvalues of second-order elliptic and fourth-order biharmonic operators subject to geometry constraints. These models are usually solved with topology optimization ...