Nowadays, a successful company or institution is not sustained by impressive facilities, capable resources, and good strategy alone. Employees' soft skills must also be considered, and one of the most important soft skills everyone needs is integrity.

Integrity is the unity of the values one professes with one's thoughts and actions. The essence of integrity is honesty, manifested in the alignment of belief, thought, and deed within a person. (Source: The Telkomsel Way Heart Book.)

When integrity lives in the heart of every person in a company or institution, the organization's human resources will naturally develop. As a result, a company whose employees are solid and honest gains a strength that makes the company more robust and resilient.

Integrity is required not only of leaders but also of employees. Integrity also indicates several things. First, integrity is a hallmark of leadership: a good leader is honest, and embraces and motivates employees and followers. Second, integrity is like a foundation: a company or institution will not survive if its leaders or employees do not uphold honesty and integrity, and it will suffer frequent internal conflicts that ultimately hurt the company's performance. Third, believe that integrity is good and generates positive energy; positive energy can trigger other good deeds that help the company work in greater synergy.

Finally, integrity is more important than mere profit; strong integrity signals the long-term viability of the company itself. A company whose employees lack integrity will suffer negative effects that harm the business.

Integrity also makes a person aware of their place as part of the nation and the state. That awareness is then realized in thought and deed. When someone has high integrity, they will not hesitate to devote all their potential, body and soul, to the advancement of the nation. It is at that moment that a person's professionalism becomes visible. Professionalism in service of the nation's progress is proof of the nationalism born of a person's integrity.

Many of this nation's figures have shown high nationalism rooted in their integrity. Sutan Syahrir is one example of a national figure with high integrity. He was among the figures instrumental in the proclamation of Indonesian independence. Upon learning that Japan had surrendered to the Allies, Sutan Syahrir and other young Indonesian leaders urgently pressed Bung Karno and Bung Hatta to proclaim Indonesian independence immediately. Although Bung Karno felt the matter should first be discussed with the PPKI, Sutan Syahrir insisted on proclamation without the PPKI's approval, since the PPKI had itself been formed by the Japanese. The high integrity Sutan Syahrir possessed made him fearless in working toward Indonesian independence. To this day, Sutan Syahrir and his fellow youth leaders are remembered for their service to Indonesia's independence, and his nationalist spirit remains a story cherished by the whole Indonesian nation. The integrity Sutan Syahrir showed deserves to be emulated by all Indonesians working to advance this country.


Simulation Optimization is providing solutions to important practical problems previously beyond reach. This paper explores how new approaches are significantly expanding the power of Simulation Optimization for managing risk. Recent advances in Simulation Optimization technology are leading to new opportunities to solve problems more effectively. Specifically, in applications involving risk and uncertainty, Simulation Optimization surpasses the capabilities of other optimization methods, not only in the quality of solutions, but also in their interpretability and practicality. In this paper, we demonstrate the advantages of using a Simulation Optimization approach to tackle risky decisions, by showcasing the methodology on two popular applications from the areas of finance and business process design.

Whenever uncertainty exists, there is risk. Uncertainty is present when there is a possibility that the outcome of a particular event will deviate from what is expected. In some cases, we can use past experience and other information to try to estimate the probability of occurrence of different events. This allows us to estimate a probability distribution for all possible events. *Risk* can be defined as the probability of occurrence of an event that would have a negative effect on a goal. On the other hand, the probability of occurrence of an event that would have a positive impact is considered an *opportunity* (see Ref. 1 for a detailed discussion of risks and opportunities). Therefore, the portion of the probability distribution that represents potentially harmful, or unwanted, outcomes is the focus of risk management.

Risk management is the process that involves identifying, selecting and implementing measures that can be applied to mitigate risk in a particular situation.^{1} The objective of risk management, in this context, is to find the set of actions (i.e., investments, policies, resource configurations, etc.) to reduce the level of risk to acceptable levels. What constitutes an acceptable level will depend on the situation, the decision makers’ attitude towards risk, and the marginal rewards expected from taking on additional risk. In order to help risk managers achieve this objective, many techniques have been developed, both qualitative and quantitative. Among quantitative techniques, optimization has a natural appeal because it is based on objective mathematical formulations that usually output an optimal solution (i.e. set of decisions) for mitigating risk. However, traditional optimization approaches are prone to serious limitations.
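As a toy illustration of risk as the probability of an unwanted outcome, the sketch below uses Monte Carlo sampling to estimate the chance that a hypothetical project return falls below zero. The return distribution and all parameter values here are invented for the example:

```python
import random

def estimate_risk(simulate_outcome, threshold, n_trials=100_000, seed=42):
    """Estimate risk as the probability that an outcome falls below a threshold."""
    rng = random.Random(seed)
    shortfalls = sum(1 for _ in range(n_trials) if simulate_outcome(rng) < threshold)
    return shortfalls / n_trials

# Hypothetical project return: normally distributed, mean 5%, st. dev. 10%.
risk = estimate_risk(lambda rng: rng.gauss(0.05, 0.10), threshold=0.0)
```

The left tail below the threshold is exactly the "portion of the probability distribution that represents potentially harmful outcomes" described above; a risk manager would compare this estimated probability against an acceptable level.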

In Section 2 of this paper, we briefly describe two prominent optimization techniques that are frequently used in risk management applications for their ability to handle uncertainty in the data; we then discuss the advantages and disadvantages of these methods. In Section 3, we discuss how Simulation Optimization can overcome the limitations of traditional optimization techniques, and we detail some innovative methods that make this a very useful, practical and intuitive approach for risk management. Section 4 illustrates the advantages of Simulation Optimization on two practical examples. Finally, in Section 5 we summarize our results and conclusions.

**Traditional Scenario-based Optimization**

Very few situations in the real world are completely devoid of risk. In fact, a person would be hard-pressed to recall a single decision in their life that was completely risk-free. In the world of deterministic optimization, we often choose to “ignore” uncertainty in order to come up with a unique and objective solution to a problem. But in situations where uncertainty is at the core of the problem – as it is in risk management – a different strategy is required.

In the field of optimization, there are various approaches designed to cope with uncertainty.^{2,3} In this context, the exact values of the parameters (e.g. the data) of the optimization problem are not known with absolute certainty, but may vary to a greater or lesser extent depending on the nature of the factors they represent. In other words, there may be many possible “realizations” of the parameters, each of which is a possible *scenario*.

Traditional scenario-based approaches to optimization, such as *scenario optimization* and *robust optimization*, are effective in finding a solution that is feasible for all the scenarios considered, and minimizing the deviation of the overall solution from the optimal solution for each scenario. These approaches, however, only consider a very small subset of possible scenarios, and the size and complexity of models they can handle are very limited.

*Robust Optimization*

Robust optimization may be used when the parameters of the optimization problem are known only within a finite set of values. The robust optimization framework gets its name because it seeks to identify a robust decision – i.e. a solution that performs well across many possible scenarios.

In order to measure the robustness of a given solution, different criteria may be used. Kouvelis and Yu identify three criteria: (1) Absolute robustness; (2) Robust deviation; and (3) Relative robustness. We illustrate the meaning and relevance of these criteria by describing their robust optimization approach.
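The three criteria can be computed directly from a table of scenario outcomes. The sketch below is a minimal illustration, assuming a hypothetical cost matrix where lower cost is better: absolute robustness is the worst-case cost of a decision, robust deviation is its maximum regret against the best decision per scenario, and relative robustness normalizes that regret:

```python
def robustness_criteria(cost):
    """cost[s][d]: cost of decision d under scenario s (lower is better)."""
    n_dec = len(cost[0])
    best_per_scenario = [min(row) for row in cost]
    # (1) Absolute robustness: worst-case cost of each decision.
    absolute = [max(row[d] for row in cost) for d in range(n_dec)]
    # (2) Robust deviation: maximum regret versus the scenario optimum.
    deviation = [max(row[d] - b for row, b in zip(cost, best_per_scenario))
                 for d in range(n_dec)]
    # (3) Relative robustness: maximum regret as a fraction of the optimum.
    relative = [max((row[d] - b) / b for row, b in zip(cost, best_per_scenario))
                for d in range(n_dec)]
    return absolute, deviation, relative

# Two decisions evaluated under three scenarios (hypothetical costs).
cost = [[10, 12],
        [20, 15],
        [30, 18]]
absolute, deviation, relative = robustness_criteria(cost)
```

Under all three criteria the second decision is the more robust choice here, even though the first decision is optimal in the first scenario; that is the essence of trading scenario-by-scenario optimality for robustness.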


Problem: simulations almost never produce raw output that is independent and identically distributed (i.i.d.) normal data. Example: customer waiting times from a queueing system…

(1) Are not independent — typically, they are serially correlated. If one customer at the post office waits in line a long time, then the next customer is also likely to wait a long time.

(2) Are not identically distributed. Customers showing up early in the morning might have a much shorter wait than those who show up just before closing time.

(3) Are not normally distributed — they are usually skewed to the right (and are certainly never less than zero).

Thus, it’s difficult to apply “classical” statistical techniques to the analysis of simulation output. Our purpose: give methods to perform statistical analysis of output from discrete-event computer simulations.
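Points (1)–(3) can be seen concretely in a minimal sketch: waiting times for a single-server queue generated by Lindley's recursion, with a lag-1 autocorrelation check. The traffic intensity and seed are arbitrary choices for illustration:

```python
import random

def mm1_waits(n, lam=0.9, mu=1.0, seed=1):
    """Waiting times in an M/M/1 queue via Lindley's recursion:
    W[i+1] = max(0, W[i] + S[i] - A[i+1])."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        service = rng.expovariate(mu)          # service time of customer i
        interarrival = rng.expovariate(lam)    # time until next arrival
        w = max(0.0, w + service - interarrival)
    return waits

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

waits = mm1_waits(10_000)
r1 = lag1_autocorr(waits)  # strongly positive: successive waits are correlated
```

The waits are never negative (so they cannot be normal), and under heavy traffic the lag-1 autocorrelation is close to 1, which is exactly why treating the raw series as i.i.d. data understates the variance of the sample mean.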

**Types of Simulations**

To facilitate the presentation, we identify two types of simulations with respect to output analysis: Finite-Horizon (Terminating) and Steady-State simulations.

**Finite-Horizon Simulations:** The termination of a finite-horizon simulation takes place at a specific time or is caused by the occurrence of a specific event. Examples are:

- Mass transit system during rush hour.
- Distribution system over one month.
- Production system until a set of machines breaks down.
- Start-up phase of any system, whether stationary or nonstationary.

**Steady-state simulations:** The purpose of a steady-state simulation is the study of the long-run behavior of a system. A performance measure is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process. Examples are:

- Continuously operating communication system, where the objective is the computation of the long-run mean delay of a packet.
- Distribution system over a long period of time.
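One standard remedy for the correlated output of a steady-state simulation (not detailed in the text above) is the method of batch means: the series is divided into batches whose means are approximately independent, and classical statistics are applied to those means. A minimal sketch, with a synthetic periodic series standing in for real simulation output:

```python
import statistics

def batch_means(output, n_batches=20, warmup=0):
    """Discard a warm-up period, split the series into batches, return batch means."""
    data = output[warmup:]
    b = len(data) // n_batches          # batch size
    return [sum(data[i * b:(i + 1) * b]) / b for i in range(n_batches)]

# Synthetic stand-in for steady-state simulation output.
data = [float(i % 10) for i in range(1000)]
means = batch_means(data, n_batches=10)
# Normal-approximation half-width for a confidence interval on the mean
# (a careful analysis would use a t quantile with n_batches - 1 df).
half_width = 1.96 * statistics.stdev(means) / (len(means) ** 0.5)
```

Each batch mean averages out the within-batch correlation, so the interval is honest where an interval built from the raw observations would not be.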

Here I attach the course video on Simulation Output Analysis to provide more information.


Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.

Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.

Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

From the information above, I’ll show you an example of a simulation of an airplane.


- Verification: concerned with building the *model right*. It compares the conceptual model to the computer representation that implements that conception. It asks the questions: Is the model implemented correctly in the computer? Are the input parameters and logical structure of the model correctly represented?
- Validation: concerned with building the *right model*. It is used to determine that a model is an accurate representation of the real system. Validation is usually achieved through calibration of the model, an iterative process of comparing the model to actual system behavior and using the discrepancies between the two, and the insights gained, to improve the model. This process is repeated until model accuracy is judged to be acceptable.
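The iterative calibration loop can be sketched in a few lines. Everything here is a hypothetical stand-in: the toy model, the proportional adjustment rule, and the numbers are invented, and a real validation study would compare full output distributions with formal statistical tests, not just means:

```python
def calibrate(run_model, observed_mean, param, tol=0.01, max_iter=100):
    """Adjust a model parameter until the model's output mean matches
    observed system behavior within tolerance."""
    for _ in range(max_iter):
        model_mean = run_model(param)
        discrepancy = observed_mean - model_mean   # compare model to actual system
        if abs(discrepancy) <= tol:
            return param, model_mean               # accuracy judged acceptable
        param += 0.5 * discrepancy                 # use the discrepancy to improve
    return param, run_model(param)

# Toy model: output mean is 2 * param; the observed system mean is 8.0.
param, mean = calibrate(lambda p: 2.0 * p, observed_mean=8.0, param=1.0)
```

The loop mirrors the description above: compare, use the discrepancy to improve, repeat until acceptable.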

**Verification of Simulation Models**

Many common sense suggestions can be given for use in the verification process.

- Have the code checked by someone other than the programmer.
- Make a flow diagram which includes each logically possible action a system can take when an event occurs, and follow the model logic for each action for each event type.
- Closely examine the model output for reasonableness under a variety of settings of the input parameters. Have the code print out a wide variety of output statistics.
- Have the computerized model print the input parameters at the end of the simulation, to be sure that these parameter values have not been changed inadvertently.
- Make the computer code as self-documenting as possible. Give a precise definition of every variable used, and a general description of the purpose of each major section of code.

These suggestions are basically the same ones any programmer would follow when debugging a computer program.

**Calibration and Validation of Models**

**Optimization**

Optimization is the appropriate technology for searching over the values of the variables that can be controlled, to find the combination of values that provides the most desirable output from the simulation model.

The way to find the optimal solution is to follow these steps:

- Identify all possible decision variables that affect the output of the system
- Based on the possible values of each decision variable, identify all possible solutions
- Evaluate each of these solutions accurately
- Compare each solution fairly
- Record the best answer
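The five steps above amount to an exhaustive search over the decision space with replicated evaluations of each candidate. A minimal sketch, using an invented noisy profit function as a stand-in for the simulation model:

```python
import itertools
import random

def simulate_profit(num_servers, buffer_size, rng):
    """Hypothetical simulation model: returns one noisy profit observation."""
    base = 100 * num_servers - 5 * num_servers ** 2 + 8 * buffer_size - buffer_size ** 2
    return base + rng.gauss(0, 1)

def optimize(decision_space, n_reps=50, seed=0):
    rng = random.Random(seed)
    best, best_mean = None, float("-inf")
    # Step 2: enumerate all possible solutions from the decision variables.
    for solution in itertools.product(*decision_space.values()):
        # Step 3: evaluate each solution accurately via replications.
        reps = [simulate_profit(*solution, rng) for _ in range(n_reps)]
        mean = sum(reps) / n_reps
        # Steps 4-5: compare fairly and record the best answer.
        if mean > best_mean:
            best, best_mean = solution, mean
    return best, best_mean

# Step 1: the decision variables and their possible values (hypothetical).
decision_space = {"num_servers": range(1, 11), "buffer_size": range(0, 9)}
best, best_mean = optimize(decision_space)
```

Exhaustive enumeration only works for small decision spaces; real simulation optimizers replace step 2 with a guided search, but the evaluate/compare/record structure is the same.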

Here I provide a video that shows how to verify and validate a simulation model.


Simulation is often used when:

- no suitable theoretical model exists
- the problem is so complex that a theoretical model cannot represent the interrelationships properly.

- Simulation is “the imitative representation of the functioning of one system or process by means of the functioning of another.”
- Simulation is “the modeling of a process or system in such a way that the model mimics the response of the actual system to events that take place over time.”
- By studying the behavior of the model, insight about the behavior of the actual system can be gained.
- In practice:
  - Simulation is performed using commercial **simulation software**.
  - Performance **statistics** are gathered during the simulation.
  - Modern simulation software provides a realistic, **graphical animation** of the system being modeled.
  - During the simulation, the user can **interactively** adjust the animation speed and change model parameter values to do “what-if” analysis on the fly.
  - **State-of-the-art** simulation technology provides optimization capability.

**Why do we choose to simulate?**

- Simulation provides a way to validate whether or not the best decisions are being made.
- Simulation avoids the expensive, time-consuming, and disruptive nature of traditional trial-and-error techniques.
- The power of simulation lies in the fact that it provides a method of analysis that is not only formal and predictive, but is capable of accurately predicting the performance of a system.
- By using a computer to model a system before it is built, or to test operating policies before they are actually implemented, many pitfalls can be avoided.

**When Simulation is Appropriate**

- *Not all system problems* that could be solved with the aid of simulation should be solved using simulation.
- It is important to *select the right tool* for the task.
- *Simulation has certain limitations* of which one should be aware before making a decision to apply it to a given situation.
- As a general guideline, simulation is appropriate if:

- An operational (logical or quantitative) decision is being made.
- The process being analyzed is well defined and repetitive.
- Activities and events are interdependent and variable.
- The cost impact of the decision is greater than the cost of doing the simulation.
- The cost of experiment on the actual system is greater than the cost of simulation.

**The process of simulation experimentation**


Part 2


If analysis of the control chart indicates that the process is currently under control (i.e., is stable, with variation only coming from sources common to the process), then no corrections or changes to process control parameters are needed or desired. In addition, data from the process can be used to predict the future performance of the process. If the chart indicates that the monitored process is not in control, analysis of the chart can help determine the sources of variation, as this will result in degraded process performance. A process that is stable but operating outside of desired limits (e.g., scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process.

The control chart is one of the seven basic tools of quality control. Typically control charts are used for time-series data, though they can be used for data that have logical comparability (i.e. you want to compare samples that were taken all at the same time, or the performance of different individuals), however the type of chart used to do this requires consideration.

The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company’s engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920, the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of Common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart’s boss, George Edwards, recalled: “Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it set forth all of the essential principles and considerations which are involved in what we know today as process quality control.”^{[5]} Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood data from physical processes typically produce a “normal distribution curve” (a Gaussian distribution, also commonly referred to as a “bell curve“). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.

In 1924 or 1925, Shewhart’s innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart’s work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander for the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart’s thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

A control chart consists of:

- Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times [the data]
- The mean of this statistic using all the samples is calculated (e.g., the mean of the means, mean of the ranges, mean of the proportions)
- A centre line is drawn at the value of the mean of the statistic
- The standard error (e.g., standard deviation/sqrt(n) for the mean) of the statistic is also calculated using all the samples
- Upper and lower control limits (sometimes called “natural process limits”) that indicate the threshold at which the process output is considered statistically ‘unlikely’ and are drawn typically at 3 standard errors from the centre line
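The construction above can be sketched in a few lines for an x-bar chart. This is a simplified sketch with invented measurement data: textbook charts estimate sigma via the R-bar/d2 or s-bar/c4 constants rather than the raw average of sample standard deviations used here:

```python
import statistics

def xbar_limits(samples):
    """Centre line and 3-standard-error control limits for sample means."""
    means = [statistics.mean(s) for s in samples]          # the plotted statistic
    centre = statistics.mean(means)                        # mean of the sample means
    n = len(samples[0])
    # Crude sigma estimate: average of the sample standard deviations.
    sigma_hat = statistics.mean([statistics.stdev(s) for s in samples])
    se = sigma_hat / n ** 0.5                              # standard error of the mean
    return centre - 3 * se, centre, centre + 3 * se

# Hypothetical measurements: three samples of four units each.
samples = [[5.1, 4.9, 5.0, 5.2], [4.8, 5.0, 5.1, 4.9], [5.0, 5.2, 4.9, 5.1]]
lcl, centre, ucl = xbar_limits(samples)
```

A point plotted outside `[lcl, ucl]` would be the statistically 'unlikely' event the text describes, signaling possible special-cause variation.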

The chart may have other optional features, including:

- Upper and lower warning limits, drawn as separate lines, typically two standard errors above and below the centre line
- Division into zones, with the addition of rules governing frequencies of observations in each zone

- Annotation with events of interest, as determined by the Quality Engineer in charge of the process’s quality

If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart “signaling” the presence of a special-cause requires immediate investigation.

This makes the control limits very important decision aids. The control limits provide information about the process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process’ design simply cannot deliver the process characteristic at the desired level.

Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.

The purpose of control charts is to allow simple detection of events that are indicative of actual process change. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good its cause should be identified and possibly become the new way of working, where the change is bad then its cause should be identified and eliminated.

The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it’s clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes. The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
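The 1/370.4 and 1/21.98 figures for normally distributed data can be verified directly from the standard normal CDF:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability a plotted point falls outside +/- 3 sigma by chance alone,
# and the corresponding average run length between false alarms.
p_outside_3sigma = 2 * (1 - phi(3))
arl_3sigma = 1 / p_outside_3sigma     # about 370.4 points

# Same calculation for the 2-sigma warning limits.
p_outside_2sigma = 2 * (1 - phi(2))
arl_2sigma = 1 / p_outside_2sigma     # about 21.98 points
```

These run lengths are why crossing a warning limit merits closer watching while crossing a control limit merits immediate investigation.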


The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis. This kind of plot is also called a *scatter chart*, *scattergram*, *scatter diagram* or *scatter graph*.

A scatter plot is used when a variable exists that is under the control of the experimenter. If a parameter exists that is systematically incremented and/or decremented by the other, it is called the *control parameter* or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis and a scatter plot will illustrate only the degree of correlation (not causation) between two variables.

A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. For example, with weight and height, weight would be on the x-axis and height on the y-axis. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation. A line of best fit (alternatively called a ‘trendline’) can be drawn in order to study the correlation between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. No universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships. A scatter plot is also very useful when we wish to see how two comparable data sets agree with each other. In this case, an identity line, *i.e.*, a *y*=*x* line, or a 1:1 line, is often drawn as a reference. The more the two data sets agree, the more the scatters tend to concentrate in the vicinity of the identity line; if the two data sets are numerically identical, the scatters fall on the identity line exactly.
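For the linear case, the least-squares trendline mentioned above has a closed form. A minimal sketch, with invented height/weight data that happen to lie exactly on a line:

```python
def best_fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a scatter plot trendline."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                       # spread of x
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))     # co-variation
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical height (m) vs weight (kg) measurements.
xs = [1.5, 1.6, 1.7, 1.8, 1.9]
ys = [50.0, 56.0, 62.0, 68.0, 74.0]
slope, intercept = best_fit_line(xs, ys)   # slope ~60, intercept ~-40
```

The "finite time" guarantee in the text is visible here: the answer is one pass of arithmetic, with no iterative search involved.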

One of the most powerful aspects of a scatter plot, however, is its ability to show nonlinear relationships between variables. Furthermore, if the data is represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns.

The scatter diagram is one of the seven basic tools of quality control.


A defect concentration diagram is used effectively in the following situations:

- During data collection phase of problem identification.
- Analyzing a part or assembly for possible defects.
- Analyzing a product (or a part of a product) being manufactured with several defects.

There are a number of steps to follow when constructing the defect concentration diagram:

- Define the fault or faults (or whatever) being investigated.
- Make a map, drawing, or picture.
- Mark on the diagram each time a fault (or whatever) occurs and where it occurs.
- After a sufficient period of time, analyze it to identify where the faults occur.
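The mark-and-analyze steps above can be sketched as a simple tally of fault coordinates over named regions of a part diagram. The regions and defect locations below are invented for illustration:

```python
from collections import Counter

def defect_concentration(defects, regions):
    """Tally where faults occur on a part diagram.
    defects: list of (x, y) fault locations.
    regions: name -> (x0, y0, x1, y1) bounding box on the drawing."""
    counts = Counter()
    for x, y in defects:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                counts[name] += 1
    return counts

# Hypothetical part drawing: a unit square split into left/right halves.
regions = {"left": (0.0, 0.0, 0.5, 1.0), "right": (0.5, 0.0, 1.0, 1.0)}
defects = [(0.1, 0.2), (0.2, 0.8), (0.4, 0.5), (0.9, 0.1)]
counts = defect_concentration(defects, regions)
```

After a sufficient collection period, a lopsided tally (here, faults clustering on the left half) points to where in the process the defects are being introduced.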

Resources:

Montgomery, Douglas (2005). *Introduction to Statistical Quality Control*. John Wiley & Sons, Inc. ISBN 978-0-471-65631-9

http://en.wikipedia.org/wiki/Defect_concentration_diagram
