Yahoo奇摩 Web Search

  1. Related searches for 利息計算公式 (interest calculation formula)
  1. Compound interest - Wikipedia
    • Compounding Frequency
    • Annual Equivalent Rate
    • Examples
    • Discount Instruments
    • Calculation
    • History
    • See Also

    The compounding frequency is the number of times per year (or rarely, another unit of time) the accumulated interest is paid out, or capitalized (credited to the account), on a regular basis. The frequency could be yearly, half-yearly, quarterly, monthly, weekly, daily, or continuously (or not at all, until maturity). For example, monthly capitalization with interest expressed as an annual rate means that the compounding frequency is 12, with time periods measured in months. The effect of compounding depends on: 1. the nominal interest rate which is applied, and 2. the frequency with which interest is compounded.

    The nominal rate cannot be directly compared between loans with different compounding frequencies. Both the nominal interest rate and the compounding frequency are required in order to compare interest-bearing financial instruments. To help consumers compare retail financial products more fairly and easily, many countries require financial institutions to disclose the annual compound interest rate on deposits or advances on a comparable basis. The interest rate on an annual equivalent basis may be referred to variously in different markets as effective annual percentage rate (EAPR), annual equivalent rate (AER), effective interest rate, effective annual rate, annual percentage yield, and other terms. The effective annual rate is the total accumulated interest that would be payable up to the end of one year, divided by the principal sum. There are usually two aspects to the rules defining these rates: 1. The rate is the annualised compound interest rate, and 2. There may be charges oth...

    1,000 Brazilian real (BRL) is deposited into a Brazilian savings account paying 20% per annum, compounded annually. At the end of one year, 1,000 x 20% = 200 BRL interest is credited to the account...
    A rate of 1% per month is equivalent to a simple annual interest rate (nominal rate) of 12%, but allowing for the effect of compounding, the annual equivalent compound rate is 12.68% per annum (1.0...
    The interest on corporate bonds and government bonds is usually payable twice yearly. The amount of interest paid (each six months) is the disclosed interest rate divided by two and multiplied by t...
    Canadian mortgage loans are generally compounded semi-annually with monthly (or more frequent) payments.
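
The monthly-to-annual figures quoted above can be verified in a few lines; a minimal sketch in Python:

```python
# Check the figures quoted above: a nominal 1% per month compounds
# to an annual equivalent rate of roughly 12.68%.
monthly_rate = 0.01
aer = (1 + monthly_rate) ** 12 - 1
print(f"{aer:.4%}")   # 12.6825%

# The Brazilian example: 20% per annum, compounded annually, on 1,000 BRL.
interest = 1000 * 0.20
print(interest)       # 200.0
```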

    US and Canadian T-bills (short-term government debt) have a different convention. Their interest is calculated on a discount basis as (100 − P)/Pbnm, where P is the price paid...

    Periodic compounding

    The total accumulated value, including the principal sum P plus compounded interest I, is given by the formula:

    P′ = P(1 + r/n)^(nt)

    where:
    1. P is the original principal sum
    2. P′ is the new principal sum
    3. r is the nominal annual interest rate
    4. n is the compounding frequency
    5. t is the overall length of time the interest is applied (expressed using the same time units as r, usually years). The...
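
The periodic-compounding formula translates directly into code; a small sketch (the function name is illustrative, not from the source):

```python
def compound(principal, rate, n, t):
    """Accumulated value P' = P * (1 + r/n)**(n*t)."""
    return principal * (1 + rate / n) ** (n * t)

# 1,000 at a 20% nominal annual rate, compounded annually for one year:
print(compound(1000, 0.20, 1, 1))    # 1200.0
# The same nominal rate compounded monthly accumulates slightly more:
print(compound(1000, 0.20, 12, 1))
```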

    Accumulation function

    Since the principal P is simply a coefficient, it is often dropped for simplicity, and the resulting accumulation function is used instead. The accumulation function shows what $1 grows to after any length of time. The accumulation functions for simple and compound interest are:

    1. a(t) = 1 + rt
    2. a(t) = (1 + r/n)^(nt)

    If nt = 1, then these two functions are the same.

    Continuous compounding

    As n, the number of compounding periods per year, increases without limit, the case is known as continuous compounding, in which case the effective annual rate approaches an upper limit of e^r − 1, where e is the mathematical constant that is the base of the natural logarithm. Continuous compounding can be thought of as making the compounding period infinitesimally small, achieved by taking the limit as n goes to infinity. See definitions of the exponential function for the mathematical proof of...
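
Raising the compounding frequency shows the effective rate approaching e^r − 1, as described; a quick numerical check:

```python
import math

rate = 0.05
# Effective annual rate for increasingly frequent compounding:
for n in (1, 12, 365, 1_000_000):
    print(n, (1 + rate / n) ** n - 1)

# The limit as n grows without bound:
print("e^r - 1 =", math.exp(rate) - 1)
```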

    Compound interest was once regarded as the worst kind of usury and was severely condemned by Roman law and the common laws of many other countries. The Florentine merchant Francesco Balducci Pegolotti provided a table of compound interest in his book Pratica della mercatura of about 1340. It gives the interest on 100 lire, for rates from 1% to 8%, for up to 20 years. The Summa de arithmetica of Luca Pacioli (1494) gives the Rule of 72, stating that to find the number of years for an investment at compound interest to double, one should divide the interest rate into 72. Richard Witt's book Arithmeticall Questions, published in 1613, was a landmark in the history of compound interest. It was wholly devoted to the subject (previously called anatocism), whereas previous writers had usually treated compound interest briefly in just one chapter in a mathematical textbook. Witt's book gave tables based on 10% (the then maximum rate of interest allowable on loans) and on other rates for diff...

  2. Debt-to-equity ratio - Wikipedia
    • Usage
    • Formula
    • Background
    • Example
    • See Also
    • External Links

    Preferred stock can be considered part of debt or equity. Attributing preferred shares to one or the other is partially a subjective decision, but will also take into account the specific features of the preferred shares. When used to calculate a company's financial leverage, the debt usually includes only the long-term debt (LTD). Quoted ratios can even exclude the current portion of the LTD. The composition of equity and debt and its influence on the value of the firm is much debated and is also described in the Modigliani–Miller theorem. Financial economists and academic papers will usually refer to all liabilities as debt, and the statement that equity plus liabilities equals assets is therefore an accounting identity (it is, by definition, true). Other definitions of debt to equity may not respect this accounting identity, and should be carefully compared. Generally speaking, a high ratio may indicate that the company is much resourced with (outside) borrowing as compared to funding f...

    In a general sense, the ratio is simply debt divided by equity. However, what is classified as debt can differ depending on the interpretation used. Thus, the ratio can take on a number of forms including: 1. Debt / Equity 2. Long-term Debt / Equity 3. Total Liabilities / Equity In a basic sense, Total Debt / Equity is a measure of all of a company's future obligations on the balance sheet relative to equity. However, the ratio can be more discerning as to what is actually a borrowing, as opposed to other types of obligations that might exist on the balance sheet under the liabilities section. For example, often only the liabilities accounts that are actually labelled as "debt" on the balance sheet are used in the numerator, instead of the broader category of "total liabilities". In other words, actual borrowings like bank loans and interest-bearing debt securities are used, as opposed to the broadly inclusive category of total liabilities which, in addition to debt-labelled account...

    On a balance sheet, the formal definition is that debt (liabilities) plus equity equals assets, or any equivalent reformulation. The formulas below are therefore identical:

    1. A = D + E
    2. E = A − D or D = A − E

    Debt to equity can also be reformulated in terms of assets or debt:

    1. D/E = D/(A − D) = (A − E)/E
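
With hypothetical numbers, the accounting identity and the reformulations above can be checked directly:

```python
# Hypothetical balance sheet: assets A and debt D; equity follows from A = D + E.
A, D = 100.0, 40.0
E = A - D

# The reformulations of debt-to-equity agree with the plain ratio:
assert D / E == D / (A - D) == (A - E) / E
print(D / E)
```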

    General Electric Co. 1. Debt / equity: 4.304 (total debt / stockholder equity) (340/79). Note: this is often presented in percentage form, for instance 430.4%. 2. Other equity / shareholder equity: 7.177 (568,303,000 / 79,180,000) 3. Equity ratio: 12% (shareholder equity / all equity) (79,180,000 / 647,483,000)

  3. Cohen's kappa - Wikipedia
    • History
    • Definition
    • Examples
    • Properties
    • Related Statistics
    • See Also
    • Further Reading
    • External Links

    The first mention of a kappa-like statistic is attributed to Galton (1892); see Smeeton (1985). The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.

    Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The definition of κ is:

    κ ≡ (p_o − p_e) / (1 − p_e) = 1 − (1 − p_o) / (1 − p_e),

    where p_o is the relative observed agreement among raters, and p_e is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly seeing each category. If the raters are in complete agreement then κ = 1. If there is no agreement among the raters other than what would be expected by chance (as given by p_e), κ = 0. It is possible for the statistic to be negative, which implies that there is no effective agreement between the two raters or that the agreement is worse than random. For k categories, N observations to categorize, and n_ki the number of tim...

    Simple example

    Suppose that you were analyzing data related to a group of 50 people applying for a grant. Each grant proposal was read by two readers and each reader either said "Yes" or "No" to the proposal. Suppose the disagreement count data were as follows, where A and B are readers, data on the main diagonal of the matrix (a and d) count the number of agreements, and off-diagonal data (b and c) count the number of disagreements. The observed proportionate agreement is:

    p_o = (a + d) / (a + b + c + d)...
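
Since the count table itself is truncated above, here is a sketch with hypothetical counts a, b, c, d for the two readers (the formula, not the numbers, is from the text):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 table: a and d are agreements, b and c disagreements."""
    n = a + b + c + d
    p_o = (a + d) / n                    # observed proportionate agreement
    # chance agreement from each reader's marginal "Yes"/"No" proportions
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# hypothetical counts for 50 proposals:
print(cohens_kappa(20, 5, 10, 15))   # ≈ 0.4 (p_o = 0.7, p_e = 0.5)
```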

    Same percentages but different numbers

    A case sometimes considered to be a problem with Cohen's Kappa occurs when comparing the Kappa calculated for two pairs of raters with the two raters in each pair having the same percentage agreement but one pair give a similar number of ratings in each class while the other pair give a very different number of ratings in each class.(In the cases below, notice B has 70 yeses and 30 nos, in the first case, but those numbers are reversed in the second.) For instance, in the following two cases...

    Hypothesis testing and confidence interval

    A P-value for kappa is rarely reported, probably because even relatively low values of kappa can nonetheless be significantly different from zero but not of sufficient magnitude to satisfy investigators. Still, its standard error has been described and is computed by various computer programs. Confidence intervals for kappa may be constructed, for the expected kappa values if an infinite number of items were checked, using the following formula:

    CI: κ ± Z_(1−α/2) SE_κ

    Interpreting magnitude

    If statistical significance is not a useful guide, what magnitude of kappa reflects adequate agreement? Guidelines would be helpful, but factors other than agreement can influence its magnitude, which makes interpretation of a given magnitude problematic. As Sim and Wright noted, two important factors are prevalence (are the codes equiprobable or do their probabilities vary) and bias (are the marginal probabilities for the two observers similar or different). Other things being equal, kappas...

    Kappa maximum

    Kappa assumes its theoretical maximum value of 1 only when both observers distribute codes the same, that is, when corresponding row and column sums are identical. Anything less is less than perfect agreement. Still, the maximum value kappa could achieve given unequal distributions helps interpret the value of kappa actually obtained. The equation for κ_max is:

    κ_max = (P_max − P_exp) / (1 − P_exp)

    where P_exp = ∑_i ...
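
A sketch of κ_max from the marginal proportions, assuming the usual forms P_max = Σ min(row_i, col_i) and P_exp = Σ row_i·col_i (the text's P_exp sum is truncated, so these are stated assumptions):

```python
def kappa_max(row_marginals, col_marginals):
    """Maximum attainable kappa given each rater's marginal distribution."""
    p_max = sum(min(r, c) for r, c in zip(row_marginals, col_marginals))
    p_exp = sum(r * c for r, c in zip(row_marginals, col_marginals))
    return (p_max - p_exp) / (1 - p_exp)

# Identical marginals allow kappa to reach 1; unequal marginals cap it below 1.
print(kappa_max([0.6, 0.4], [0.6, 0.4]))   # 1.0
print(kappa_max([0.7, 0.3], [0.5, 0.5]))
```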

    Scott's Pi

    A similar statistic, called pi, was proposed by Scott (1955). Cohen's kappa and Scott's pi differ in terms of how p_e is calculated.

    Fleiss' kappa

    Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1971). The Fleiss kappa, however, is a multi-rater generalization of Scott's pi statistic, not Cohen's kappa. Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning.

    Weighted kappa

    The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved: the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros. Off-diagonal cells contain weights indicating the seriousness of that disagreement. Often, cells one off the diagonal are...

    Banerjee, M.; Capozzoli, Michelle; McSweeney, Laura; Sinha, Debajyoti (1999). "Beyond Kappa: A Review of Interrater Agreement Measures". The Canadian Journal of Statistics. 27 (1): 3–23. doi:10.230...
    Brennan, R. L.; Prediger, D. J. (1981). "Coefficient λ: Some Uses, Misuses, and Alternatives". Educational and Psychological Measurement. 41 (3): 687–699. doi:10.1177/001316448104100307. S2CID 1228...
    Cohen, Jacob (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 20 (1): 37–46. doi:10.1177/001316446002000104. hdl:1942/28116. S2CID 15926286.
    Cohen, J. (1968). "Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit". Psychological Bulletin. 70 (4): 213–220. doi:10.1037/h0026256. PMID 19673146.
  4. Symbol rate - Wikipedia
    • Symbols
    • Modulation
    • Significant Condition
    • See Also
    • External Links

    A symbol may be described as either a pulse in digital baseband transmission or a tone in passband transmission using modems. A symbol is a waveform, a state, or a significant condition of the communication channel that persists for a fixed period of time. A sending device places symbols on the channel at a fixed and known symbol rate, and the receiving device has the job of detecting the sequence of symbols in order to reconstruct the transmitted data. There may be a direct correspondence between a symbol and a small unit of data. For example, each symbol may encode one or several binary digits or 'bits'. The data may also be represented by the transitions between symbols, or even by a sequence of many symbols. The symbol duration time, also known as the unit interval, can be directly measured as the time between transitions by looking at an eye diagram on an oscilloscope. The symbol duration time T_s can be calculated as:

    T_s = 1 / f_s

    where f_s is...
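
The relation T_s = 1/f_s, and the distinction between symbol rate and bit rate, in a minimal sketch (the numbers are hypothetical):

```python
symbol_rate = 9_600        # f_s, symbols per second (baud)
t_s = 1 / symbol_rate      # symbol duration T_s = 1 / f_s, in seconds

# If each symbol encodes several bits (e.g. one of 4 states -> 2 bits),
# the bit rate exceeds the symbol rate:
bits_per_symbol = 2
bit_rate = symbol_rate * bits_per_symbol
print(t_s, bit_rate)
```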

    Many data transmission systems operate by the modulation of a carrier signal. For example, in frequency-shift keying (FSK), the frequency of a tone is varied among a small, fixed set of possible values. In a synchronous data transmission system, the tone can only be changed from one frequency to another at regular and well-defined intervals. The presence of one particular frequency during one of these intervals constitutes a symbol. (The concept of symbols does not apply to asynchronous data transmission systems.) In a modulated system, the term modulation rate may be used synonymously with symbol rate.

    In telecommunication, concerning the modulation of a carrier, a significant condition is one of the signal's parameters chosen to represent information. A significant condition could be an electric current (voltage, or power level), an optical power level, a phase value, or a particular frequency or wavelength. The duration of a significant condition is the time interval between successive significant instants. A change from one significant condition to another is called a signal transition. Information can be transmitted either during the given time interval, or encoded as the presence or absence of a change in the received signal. Significant conditions are recognized by an appropriate device called a receiver, demodulator, or decoder. The decoder translates the actual signal received into its intended logical value such as a binary digit (0 or 1), an alphabetic character, a mark, or a space. Each significant instant is determined when the appropriate device assumes a condition or...

    Gross bit rate, also known as data signaling rate or line rate.
    "On the origins of serial communications and data encoding". Archived from the original on December 5, 2012. Retrieved January 4, 2007.
  5. Haversine formula - Wikipedia
    • Formulation
    • The Law of Haversines
    • See Also
    • Further Reading
    • External Links

    Let the central angle Θ between any two points on a sphere be:

    Θ = d / r

    where:
    1. d is the distance between the two points along a great circle of the sphere (see spherical distance),
    2. r is the radius of the sphere.

    The haversine formula allows the haversine of Θ (that is, hav(Θ)) to be computed directly from the latitude (represented by φ) and longitude (represented by λ) of the two points:

    hav(Θ) = hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ2 − λ1)

    where:
    1. φ1, φ2 are the latitude of point 1 and latitude of point 2 (in radians),
    2. λ1, λ2 are the longitude of point 1 and longitude of point 2 (in radians).

    Finally, the haversine function hav(Θ), applied above to both the central angle Θ...
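
The formula above can be transcribed directly, using hav(θ) = sin²(θ/2); a sketch assuming degree inputs and Earth's mean radius in kilometres:

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two points, in the same units as r."""
    hav = lambda theta: math.sin(theta / 2) ** 2   # hav(θ) = sin²(θ/2)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    h = hav(phi2 - phi1) + math.cos(phi1) * math.cos(phi2) * hav(dlmb)
    # Invert the haversine: Θ = 2·asin(√h), then distance d = r·Θ.
    return 2 * r * math.asin(math.sqrt(h))

# Two antipodal equatorial points are half a circumference apart (π·r):
print(haversine_distance(0.0, 0.0, 0.0, 180.0))
```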

    Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points u, v, and w on the sphere. If the lengths of these three sides are a (from u to v), b (from u to w), and c (from v to w), and the angle of the corner opposite c is C, then the law of haversines states:

    hav(c) = hav(a − b) + sin(a) sin(b) hav(C).

    Since this is a unit sphere, the lengths a, b, and c are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius R of the sphere). In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where u is the north pole, while v and w are the two points whose separation d is to be determined. In tha...

    U. S. Census Bureau Geographic Information Systems FAQ, (content has been moved to What is the best way to calculate the distance between 2 points?)
    R. W. Sinnott, "Virtues of the Haversine", Sky and Telescope 68(2), 159 (1984).
    Romuald Ireneus 'Scibor-Marchocki, Spherical trigonometry, Elementary-Geometry Trigonometry web page (1997).
    Other implementations in C++, C (MacOS), Pascal, Python, Ruby, JavaScript, PHP, Matlab, MySQL
  6. Mean absolute percentage error - Wikipedia
    • Mape in Regression Problems
    • Alternative Mape Definitions
    • Issues
    • See Also
    • External Links

    Mean absolute percentage error is commonly used as a loss function for regression problems and in model evaluation, because of its very intuitive interpretation in terms of relative error.

    Problems can occur when calculating the MAPE value with a series of small denominators. A singularity problem of the form 'one divided by zero' and/or the creation of very large changes in the Absolute Percentage Error, caused by a small deviation in error, can occur. As an alternative, each actual value (At) of the series in the original formula can be replaced by the average of all actual values (Āt) of that series. This alternative is still being used for measuring the performance of models that forecast spot electricity prices. Note that this is equivalent to dividing the sum of absolute differences by the sum of actual values, and is sometimes referred to as WAPE (weighted absolute percentage error) or wMAPE (weighted mean absolute percentage error).
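
The stated equivalence, replacing each denominator with the mean actual versus dividing the total absolute error by the total of the actuals, can be checked numerically (the data are hypothetical):

```python
actual   = [100.0, 80.0, 120.0]
forecast = [110.0, 70.0, 130.0]
n = len(actual)

# WAPE / wMAPE: sum of absolute errors over the sum of actual values.
wape = sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

# MAPE with every denominator A_t replaced by the average of the actuals.
mean_a = sum(actual) / n
alt = sum(abs(a - f) / mean_a for a, f in zip(actual, forecast)) / n

print(wape, alt)   # equal up to floating-point rounding
```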

    Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application, and there are many studies on shortcomings and misleading results from MAPE. 1. It cannot be used if there are zero values (which sometimes happens, for example, in demand data) because there would be a division by zero. 2. For forecasts which are too low the percentage error cannot exceed 100%, but for forecasts which are too high there is no upper limit to the percentage error. 3. MAPE puts a heavier penalty on negative errors (where A_t < F_t) than on positive errors.

  7. A-weighting - Wikipedia
    • History
    • Deficiencies
    • B-, C-, D-, G- and Z-Weightings
    • Environmental and Other Noise Measurements
    • Audio Reproduction and Broadcasting Equipment
    • Function Realisation of Some Common Weightings
    • Transfer Function Equivalent
    • See Also
    • Further Reading
    • External Links

    A-weighting began with work by Fletcher and Munson which resulted in their publication, in 1933, of a set of equal-loudness contours. Three years later these curves were used in the first American standard for sound level meters. This ANSI standard, later revised as ANSI S1.4-1981, incorporated B-weighting as well as the A-weighting curve, recognising the unsuitability of the latter for anything other than low-level measurements. But B-weighting has since fallen into disuse. Later work, first by Zwicker and then by Schomer, attempted to overcome the difficulty posed by different levels, and work by the BBC resulted in the CCIR-468 weighting, currently maintained as ITU-R 468 noise weighting, which gives more representative readings on noise as opposed to pure tones.

    A-weighting is valid for representing the sensitivity of the human ear as a function of the frequency of pure tones, but only for relatively quiet levels of sound. In effect, the A-weighting is based on the 40-phon Fletcher–Munson curves, which represented an early determination of the equal-loudness contour for human hearing. However, because decades of field experience have shown a very good correlation between the A scale and occupational deafness in the frequency range of human speech, this scale is employed in many jurisdictions to evaluate the risks of occupational deafness and other auditory problems related to signals or speech intelligibility in noisy environments. Because of perceived discrepancies between early and more recent determinations, the International Organization for Standardization (ISO) recently revised its standard curves as defined in ISO 226, in response to the recommendations of a study coordinated by the Research Institute of Electrical Communication, Tohoku Uni...

    A-frequency-weighting is mandated by the international standard IEC 61672 to be fitted to all sound level meters, and is an approximation to the equal-loudness contours given in ISO 226. The old B- and D-frequency-weightings have fallen into disuse, but many sound level meters provide for C-frequency-weighting, and its fitting is mandated, at least for testing purposes, for precision (Class 1) sound level meters. D-frequency-weighting was specifically designed for use when measuring high-level aircraft noise in accordance with the IEC 537 measurement standard. The large peak in the D-weighting curve is not a feature of the equal-loudness contours but reflects the fact that humans hear random noise differently from pure tones, an effect that is particularly pronounced around 6 kHz. This is because individual neurons from different regions of the cochlea in the inner ear respond to narrow bands of frequencies, but the higher-frequency neurons integrate a wider band and hence signal a...

    A-weighted decibels are abbreviated dB(A) or dBA. When acoustic (calibrated microphone) measurements are being referred to, then the units used will be dB SPL, referenced to 20 micropascals = 0 dB SPL. The A-weighting curve has been widely adopted for environmental noise measurement, and is standard in many sound level meters. The A-weighting system is used in any measurement of environmental noise (examples of which include roadway noise, rail noise, aircraft noise). A-weighting is also in common use for assessing potential hearing damage caused by loud noise. A-weighted SPL measurements of noise level are increasingly found on sales literature for domestic appliances such as refrigerators, freezers and computer fans. In Europe, the A-weighted noise level is used for instance for normalizing the noise of tires on cars. The A-weighting is also used for noise dose measurements at work. A noise level of more than 85 dB(A) each day increases the risk factor for hearing damage. Noise...

    Although the A-weighting curve, in widespread use for noise measurement, is said to have been based on the 40-phon Fletcher–Munson curve, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyses sounds in terms of spectral content, each 'hair cell' responding to a narrow band of frequencies known as a critical band. The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore 'collect' proportionately more power from a noise source. However, when more than one critical band is stimulated, the outputs of the various bands are summed by the brain to produce an impression of loudness. For these reasons equal-loudness curves derived using noise bands show an upwards tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones. This enh...

    The standard defines weightings (A(f), C(f)) in dB units by tables with tolerance limits (to allow a variety of implementations). Additionally, the standard describes weighting functions R_X(f) used to calculate the weightings. The weighting function R_X(f) is applied to the amplitude spectrum (not the intensity spectrum) of the unweighted sound level. The offsets ensure the normalisation to 0 dB at 1000 Hz. Appropriate weighting functions are:
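
The R_A(f) function elided here has a well-known analytic form; a sketch assuming the usual IEC 61672 corner frequencies (20.6, 107.7, 737.9 and 12194 Hz) and the +2.00 dB offset that normalises the curve to roughly 0 dB at 1 kHz:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting A(f) in dB. The standard itself is defined by
    tables with tolerance limits, so treat this as the analytic approximation."""
    f2 = f * f
    r_a = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(r_a) + 2.00

print(round(a_weighting_db(1000.0), 2))   # ~0.0 at the 1 kHz reference
```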

    The gain curves can be realised by the following s-domain transfer functions. They are not defined in this way, though, being defined by tables of values with tolerances in the standards documents, thus allowing different realisations:

    Audio Engineer's Reference Book, 2nd Ed 1999, edited Michael Talbot Smith, Focal Press
    An Introduction to the Psychology of Hearing5th ed, Brian C. J. Moore, Elsevier Press
  8. Signal-to-noise ratio - Wikipedia
    • Definition
    • Alternative Definition
    • Modulation System Measurements
    • Noise Reduction
    • Digital Signals
    • Optical Signals
    • Types and Abbreviations
    • Other Uses
    • External Links

    Signal-to-noise ratio is defined as the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input):

    SNR = P_signal / P_noise,

    where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. Depending on whether the signal is a constant (s) or a random variable (S), the signal-to-noise ratio for random noise N becomes:

    SNR = s² / E[N²]

    where E refers to the expected value, i.e. in this case the mean square of N, or

    SNR = E[S²] / E[N²]

    If the noise has an expected value of zero, as is common, the denominator is its variance, the square of its standard deviation σ_N. The signal and the...
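
In practice the power ratio is usually quoted in decibels; a minimal sketch with hypothetical power levels:

```python
import math

p_signal = 1e-3   # average signal power, watts (hypothetical)
p_noise = 1e-6    # average noise power at the same point and bandwidth

snr = p_signal / p_noise
snr_db = 10 * math.log10(snr)   # power ratios convert to dB via 10·log10
print(snr, snr_db)              # a ratio of 1000 is 30 dB
```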

    An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement:

    SNR = μ / σ

    where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof. Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance), and it is only an approximation since E[X²] = σ² + μ². It is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above, in which case it is...
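
The coefficient-of-variation form is easy to compute over an image neighborhood; a sketch with hypothetical pixel values:

```python
import statistics

# Hypothetical pixel values from a nominally uniform image neighborhood:
pixels = [98, 102, 101, 99, 100, 100, 103, 97]

mu = statistics.fmean(pixels)        # mean pixel value
sigma = statistics.pstdev(pixels)    # standard deviation of the pixel values
print(mu / sigma)                    # SNR = mean / standard deviation
```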

    Amplitude modulation

    The channel signal-to-noise ratio is given by

    (SNR)_C,AM = A_c² (1 + k_a² P) / (2 W N_0)

    where W is the bandwidth and k_a is the modulation index. The output signal-to-noise ratio (of an AM receiver) is given by

    (SNR)_O,AM = A_c² k_a² P / (2 W N_0)

    Frequency modulation

    The channel signal-to-noise ratio is given by

    (SNR)_C,FM = A_c² / (2 W N_0)

    The output signal-to-noise ratio is given by

    (SNR)_O,FM = A_c² k_f² P / (2 N_0 W³)

    All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon, such as wind, vibrations, the gravitational attraction of the moon, or variations of temperature and humidity, depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Internal electronic noise of measurement systems can be reduced through the use of low-noise amplifiers. When the characteristics of the noise are known and are different from the signal, it is possible to use a filter to reduce the noise. For example, a lock-in amplifier can extract a narrow-bandwidth signal from broadband noise a million times stronger. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples.
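
The square-root law for averaging can be seen in a quick simulation (all parameters are hypothetical):

```python
import random
import statistics

random.seed(0)
signal = 1.0
noise_sigma = 0.5

def averaged_measurement(n):
    """Average n noisy samples of a constant signal."""
    return statistics.fmean(signal + random.gauss(0.0, noise_sigma) for _ in range(n))

# The residual error of the average shrinks roughly as 1/sqrt(n):
for n in (1, 100, 10_000):
    err = statistics.fmean(abs(averaged_measurement(n) - signal) for _ in range(50))
    print(n, err)
```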

    When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more...
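
For an ideal N-bit quantizer with a full-scale sine input, this theoretical maximum works out to the well-known rule of thumb SNR ≈ 6.02·N + 1.76 dB:

```python
# Theoretical maximum SNR of ideal N-bit quantization of a full-scale
# sine wave: each extra bit buys roughly 6 dB.
for bits in (8, 16, 24):
    print(bits, round(6.02 * bits + 1.76, 2))
```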

    Optical signals have a carrier frequency that is much higher than the modulation frequency (about 200 THz and more). As a result, the noise covers a bandwidth that is much wider than the signal itself, and the noise's influence on the received signal depends mainly on how the noise is filtered. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance, an OSNR of 20 dB/0.1 nm can be quoted even though a 40 Gbit/s DPSK signal would not fit within this bandwidth. OSNR is measured with an optical spectrum analyzer.
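By definition, OSNR is just a power ratio in the reference bandwidth, expressed in decibels. A minimal sketch (the power values are invented for illustration):

```python
import math

def osnr_db(signal_power_w, noise_power_w_in_ref_bw):
    # OSNR = signal power / noise power, with the noise power measured
    # in the reference bandwidth (commonly 0.1 nm).
    return 10 * math.log10(signal_power_w / noise_power_w_in_ref_bw)

# Example: 1 mW signal over 10 uW of noise in 0.1 nm gives an OSNR of 20 dB/0.1 nm.
```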

    Signal-to-noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio.

    While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. SNR is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as noise that interferes with the signal of appropriate discussion.

    Walt Kester, Taking the Mystery out of the Infamous Formula, "SNR = 6.02N + 1.76dB," and Why You Should Care (PDF), Analog Devices, retrieved 2019-04-10
    ADC and DAC Glossary – Maxim Integrated Products
    Understand SINAD, ENOB, SNR, THD, THD + N, and SFDR so you don't get lost in the noise floor – Analog Devices
  9. Free-space path loss - Wikipedia
    • Free-Space Path Loss Formula
    • Influence of Distance and Frequency
    • Derivation
    • Free-Space Path Loss in Decibels
    • See Also
    • Further Reading

    The free-space path loss (FSPL) formula derives from the Friis transmission formula. This states that in a radio system consisting of a transmitting antenna transmitting radio waves to a receiving antenna, the ratio of radio wave power received P_r to the power transmitted P_t is

    P_r / P_t = D_t D_r (λ / (4πd))^2

    where D_t is the directivity of the transmitting antenna, D_r is the directivity of the receiving antenna, λ is the signal wavelength, and d is the distance between the antennas. The distance between the antennas d must be large enough that the antennas are in the far field of each other (d ≫ λ). The free-space path loss is the loss factor in this equation that is due to distance and wavelength,...
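The Friis ratio is straightforward to compute. A minimal sketch, assuming isotropic antennas (D_t = D_r = 1) by default:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def friis_ratio(d_m, f_hz, d_t=1.0, d_r=1.0):
    # Pr/Pt = Dt * Dr * (lambda / (4 pi d))^2, valid in the far field (d >> lambda)
    lam = C / f_hz
    return d_t * d_r * (lam / (4 * math.pi * d_m)) ** 2
```

For isotropic antennas 1 km apart at 1 GHz, the ratio is about 5.7e-10, i.e. a free-space path loss of roughly 92.4 dB; doubling the distance cuts the received power by a factor of four.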

    The free-space loss increases with the distance between the antennas and decreases with the wavelength of the radio waves due to these factors: 1. Intensity (I) – the power density of the radio waves decreases with the square of distance from the transmitting antenna due to spreading of the electromagnetic energy in space according to the inverse-square law. 2. Antenna capture area (A_eff) – the amount of power the receiving antenna captures from the radiation field is proportional to a factor called the antenna aperture or antenna capture area, which increases with the square of the wavelength. Since this factor is not related to the radio wave path but comes from the receiving antenna, the term "free-space path loss" is a little misleading.

    The radio waves from the transmitting antenna spread out in a spherical wavefront. The amount of power passing through any sphere centered on the transmitting antenna is equal. The surface area of a sphere of radius d is 4πd^2. Thus the intensity or power density of the radiation in any particular direction from the antenna is inversely proportional to the square of distance:

    I ∝ P_t / (4πd^2)

    For an isotropic antenna, which radiates equal power in all directions, the power density is evenly distributed over the surface of a sphere centered on the antenna:

    I = P_t / (4πd^2)    (1)

    The amount of power the receiving antenna receives from this radiation field is

    P_r = A_eff I    (2)

    The factor A_eff, called the effecti...

    A convenient way to express FSPL is in terms of decibels (dB):

    FSPL(dB) = 10 log10((4πdf/c)^2)
             = 20 log10(4πdf/c)
             = 20 log10(d) + 20 log10(f) + 20 log10(4π/c)
             = 20 log10(d) + 20 log10(f) − 147.55,

    where the units are as before. For typical radio applications, it is common to find f measured in units of GHz and d in km, in which case the FSPL equation becomes

    FSPL(dB) = 20 log10(d) + 20 log10(f) + 92.45

    For d, f in meters and kilohertz respectively...
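The two decibel forms above agree up to rounding of the constant. A minimal sketch checking them against each other:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def fspl_db(d_m, f_hz):
    # FSPL(dB) with d in metres and f in hertz;
    # the constant term 20*log10(4*pi/c) is approximately -147.55
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

def fspl_db_km_ghz(d_km, f_ghz):
    # Same quantity with d in kilometres and f in gigahertz
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45
```

At 1 km and 1 GHz both forms give about 92.45 dB.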

  10. Centroid - Wikipedia
    • History
    • Properties
    • Examples
    • Locating
    • See Also
    • References
    • External Links

    The term "centroid" is of recent coinage (1814).[citation needed] It is used as a substitute for the older terms "center of gravity" and "center of mass" when the purely geometrical aspects of that point are to be emphasized. The term is peculiar to the English language. The French use "centre de gravité" on most occasions, and others use terms of similar meaning. The center of gravity, as the name indicates, is a notion that arose in mechanics, most likely in connection with building activities. When, where, and by whom it was invented is not known, as it is a concept that likely occurred to many people individually with minor differences. While Archimedes does not state that proposition explicitly, he makes indirect references to it, suggesting he was familiar with it. However, Jean-Étienne Montucla (1725–1799), the author of the first history of mathematics (1758), declares categorically (vol. I, p. 463) that the center of gravity of solids is a subject Archimedes did not touch....

    The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void. If the centroid is defined, it is a fixed point of all isometries in its symmetry group. In particular, the geometric centroid of an object lies in the intersection of all its hyperplanes of symmetry. The centroid of many figures (regular polygon, regular polyhedron, cylinder, rectangle, rhombus, circle, sphere, ellipse, ellipsoid, superellipse, superellipsoid, etc.) can be determined by this principle alone. In particular, the centroid of a parallelogram is the meeting point of its two diagonals. This is not true of other quadrilaterals. For the same reason, the centroid of an object with translational symmetry is undefined (or lies outside the enclosing space), because a translation has no fixed point.

    The centroid of a triangle is the intersection of the three medians of the triangle (each median connecting a vertex with the midpoint of the opposite side). For other properties of a triangle's centroid, see below.

    Plumb line method

    The centroid of a uniformly dense planar lamina, such as in figure (a) below, may be determined experimentally by using a plumb line and a pin to find the collocated center of mass of a thin body of uniform density having the same shape. The body is held by the pin, inserted at a point off the presumed centroid, in such a way that it can freely rotate around the pin; the plumb line is then dropped from the pin (figure b). The position of the plumb line is traced on the surface, and the procedure...

    Balancing method

    For convex two-dimensional shapes, the centroid can be found by balancing the shape on a smaller shape, such as the top of a narrow cylinder. The centroid occurs somewhere within the range of contact between the two shapes (and exactly at the point where the shape would balance on a pin). In principle, progressively narrower cylinders can be used to find the centroid to arbitrary precision. In practice air currents make this infeasible. However, by marking the overlap range from multiple bala...

    Of a finite set of points

    The centroid of a finite set of k points x_1, x_2, …, x_k in R^n is

    C = (x_1 + x_2 + ⋯ + x_k) / k.

    This point minimizes the sum of squared Euclidean distances between itself and each point in the set.
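A minimal sketch of the finite-point centroid, together with a spot check of the minimizing property:

```python
def centroid(points):
    # Component-wise arithmetic mean of k points in R^n.
    k = len(points)
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / k for i in range(dim))

def sum_sq_dist(c, points):
    # Sum of squared Euclidean distances from c to each point.
    return sum(sum((ci - pi) ** 2 for ci, pi in zip(c, p)) for p in points)

# The four corners of a square have their centroid at the square's center.
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
```

Any point other than the centroid gives a strictly larger sum of squared distances, which is the defining property stated above.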

    Altshiller-Court, Nathan (1925), College Geometry: An Introduction to the Modern Geometry of the Triangle and the Circle (2nd ed.), New York: Barnes & Noble, LCCN 52013504
    Bourke, Paul (July 1997). "Calculating the area and centroid of a polygon".
    Johnson, Roger A. (2007), Advanced Euclidean Geometry, Dover
    Kay, David C. (1969), College Geometry, New York: Holt, Rinehart and Winston, LCCN 69012075
    Characteristic Property of Centroid at cut-the-knot
    Barycentric Coordinates at cut-the-knot
    Interactive animations showing Centroid of a triangle and Centroid construction with compass and straightedge