Logarithm
In mathematics, the logarithm is the inverse function to exponentiation. This means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, the logarithm base 10 of 1000 is 3, or log10(1000) = 3, because 1000 = 10^3. The logarithm of x to base b is denoted as logb(x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible.
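The inverse relationship can be checked directly; a minimal sketch using only Python's standard library:

```python
import math

# The logarithm inverts exponentiation: if b**y == x, then log base b of x is y.
# log base 10 of 1000 is 3, because 10**3 == 1000.
x = 10 ** 3
assert math.isclose(math.log10(x), 3.0)

# An arbitrary base is available via math.log(x, b); results may carry
# a small amount of floating-point rounding.
assert math.isclose(math.log(8, 2), 3.0)
```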
John Napier introduced logarithms in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is because the logarithm of a product is the sum of the logarithms of the factors.
The logarithm base 10 is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics, because of its very simple derivative. The binary logarithm uses base 2 and is frequently used in computer science.
Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratios as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure of the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.
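The decibel and pH examples can be sketched in a few lines; the helper names below are illustrative, not from any particular library:

```python
import math

# Decibels express a power ratio on a logarithmic scale: dB = 10 * log10(P / P_ref).
def power_db(p, p_ref):
    return 10 * math.log10(p / p_ref)

# A 100-fold increase in power corresponds to 20 dB.
ratio_db = power_db(100.0, 1.0)

# pH is the negative base-10 logarithm of the hydrogen-ion concentration (mol/L).
def ph(h_concentration):
    return -math.log10(h_concentration)

# Pure water at 25 degrees C has [H+] of about 1e-7 mol/L, giving pH about 7.
water_ph = ph(1e-7)
```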
The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.
In addition to the uses mentioned above, logarithms have many other applications in mathematics and science. For example, in calculus, logarithms arise when solving for time or rate in models of exponential change, such as population growth and radioactive decay. In engineering and physics, they appear in the analysis of systems that exhibit exponential behavior, such as electrical circuits, sound waves, and heat transfer.
One of the key properties of logarithms is that they convert multiplication and division into addition and subtraction, respectively. These are the product and quotient rules for logarithms: logb(xy) = logb(x) + logb(y) and logb(x/y) = logb(x) – logb(y). This property greatly simplifies calculations, especially when working with large numbers.
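The product and quotient rules can be verified numerically; a small sketch, with `logb` as a hypothetical helper built on the change-of-base formula:

```python
import math

# Change-of-base helper: log base b of x, computed from natural logarithms.
def logb(x, b):
    return math.log(x) / math.log(b)

x, y, b = 123.0, 45.0, 10.0

# Product rule: log_b(x*y) == log_b(x) + log_b(y), up to floating-point rounding.
assert math.isclose(logb(x * y, b), logb(x, b) + logb(y, b))

# Quotient rule: log_b(x/y) == log_b(x) - log_b(y).
assert math.isclose(logb(x / y, b), logb(x, b) - logb(y, b))
```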
Half-life
Half-life, represented by the symbol t½, is the amount of time required for a quantity (of substance) to reduce to half of its initial value. The concept is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential decay, such as the biological half-life of drugs and other chemicals in the human body. The counterpart of half-life in exponential growth is the doubling time.
The term “half-life period” was first used by Ernest Rutherford in 1907 and was later shortened to “half-life” in the early 1950s. Rutherford discovered the principle while studying the decay period of radium to lead-206 in order to determine the age of rocks.
One of the key properties of half-life is that it is constant over the lifetime of an exponentially decaying quantity and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed.
It is important to note that the concept of half-life is probabilistic in nature. For example, if there is only one radioactive atom and its half-life is one second, there will not be exactly “half of an atom” left after one second. Instead, half-life is defined in terms of probability, stating that the probability of a radioactive atom decaying within its half-life is 50%.
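This probabilistic reading can be illustrated with a small Monte Carlo sketch (the function name and seed are illustrative): each atom independently has a 50% chance of decaying within one half-life, so the surviving count lands close to half, but rarely exactly half.

```python
import random

def surviving_atoms(n_atoms, n_half_lives, rng):
    # Simulate decay: each atom still present decays within a given
    # half-life period with probability 1/2, independently of the others.
    alive = n_atoms
    for _ in range(n_half_lives):
        alive = sum(1 for _ in range(alive) if rng.random() >= 0.5)
    return alive

rng = random.Random(42)  # fixed seed so the sketch is repeatable
remaining = surviving_atoms(100_000, 1, rng)
# remaining is close to 50_000, but the exact count fluctuates run to run
```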
Various equivalent formulas describe half-life in exponential decay: N(t) = N0 (1/2)^(t/t½), N(t) = N0 e^(−t/τ), and N(t) = N0 e^(−λt), where N0 is the initial quantity and N(t) is the quantity remaining after time t. The three parameters are related by t½ = τ ln 2 = (ln 2)/λ, so the half-life t½, the mean lifetime τ, and the decay constant λ all carry the same information about the decay.
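The equivalence of the three formulas, and the relation t½ = τ ln 2 = (ln 2)/λ, can be checked numerically; the quantities below are arbitrary illustrative values:

```python
import math

N0 = 1000.0                   # initial quantity (arbitrary example value)
t_half = 5.0                  # half-life (arbitrary example value)
tau = t_half / math.log(2)    # mean lifetime: tau = t_half / ln 2
lam = math.log(2) / t_half    # decay constant: lambda = ln 2 / t_half

def n_half(t):
    return N0 * 0.5 ** (t / t_half)

def n_tau(t):
    return N0 * math.exp(-t / tau)

def n_lambda(t):
    return N0 * math.exp(-lam * t)

t = 12.5
# all three parameterizations agree, up to floating-point rounding
assert math.isclose(n_half(t), n_tau(t))
assert math.isclose(n_tau(t), n_lambda(t))
# after exactly one half-life, half the initial quantity remains
assert math.isclose(n_half(t_half), N0 / 2)
```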
Limit of a function
In mathematics, the limit of a function is a fundamental concept in calculus and analysis that describes the behavior of a function near a particular input value. It is a tool used to understand how a function behaves as its input values get closer and closer to a specific value.
A function, denoted as f(x), assigns an output value to a given input value x. We say that the function has a limit L at an input value p if, as x gets closer and closer to p, the output value f(x) gets closer and closer to L. More precisely, by taking inputs sufficiently close to p, the output values can be forced arbitrarily close to L. However, if some input values that are very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist.
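This behavior can be probed numerically. A minimal sketch using the classic example f(x) = sin(x)/x, which is undefined at x = 0 yet has limit 1 there:

```python
import math

# f(x) = sin(x)/x is undefined at x = 0, but its values
# approach 1 as x approaches 0.
def f(x):
    return math.sin(x) / x

# evaluate at inputs ever closer to 0 (from the right)
samples = [f(10.0 ** -k) for k in range(1, 8)]
# the values settle arbitrarily close to the limit L = 1
```

A numerical probe like this only suggests the limit; the epsilon-delta definition mentioned below is what makes the claim rigorous.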
The concept of limit was first formally defined in the early 19th century by mathematicians such as Bolzano and Cauchy, who used the epsilon-delta technique to define continuous functions. The modern notation for limits, such as the arrow below the limit symbol, was introduced by Hardy in 1908.
The concept of limit has many applications in modern calculus. It is used in the definition of continuity: a function is continuous at a point p if its limit at p exists and equals the value of the function there. Additionally, the concept of limit appears in the definition of the derivative, which is the limiting value of the slopes of secant lines to the graph of a function.
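The derivative-as-limit idea can be sketched with secant slopes of a simple function; `secant_slope` is an illustrative helper, not a standard API:

```python
# Derivative as a limit: the slope (f(p + h) - f(p)) / h of a secant line
# approaches the derivative f'(p) as h approaches 0.
def secant_slope(f, p, h):
    return (f(p + h) - f(p)) / h

f = lambda x: x ** 2
p = 3.0
# shrinking h drives the secant slope toward the derivative f'(3) = 6
slopes = [secant_slope(f, p, 10.0 ** -k) for k in range(1, 7)]
```

In exact arithmetic the secant slope here is 6 + h, so it converges to 6 as h shrinks; in floating point, very small h eventually introduces cancellation error, which is why the sketch stops at h = 1e-6.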