hdu 5.2.7 1053 Entropy

2023-10-18 15:38
Tags: hdu entropy 5.2 1053

This post walks through hdu 5.2.7 1053 Entropy; hopefully it offers some reference for developers tackling the problem.

An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with “wasted” or “extra” information removed. In other words, entropy encoding removes information that was not necessary in the first place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information; English text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding.

English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits, eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other letters in English text. If a way could be found to encode just these letters with four bits, then the new encoding would be smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a reason, however: it’s easy, since one is always dealing with a fixed number of bits to represent each possible glyph or character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a “prefix-free variable-length” encoding.

In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint was not enforced, then such a decoding would be impossible.

Consider the text “AAAAABCD”. Using ASCII, encoding this would require 64 bits. If, instead, we encode “A” with the bit pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we can encode this text in only 16 bits; the resulting bit pattern would be “0000000000011011”. This is still a fixed-length encoding, however; we’re using two bits per glyph instead of eight. Since the glyph “A” occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C” with “110”, and “D” with “111”. (This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message encodes in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right and you’ll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes have varying bit lengths.
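
A minimal sketch of the bit-by-bit decoding just described, using the example's code table (A = "0", B = "10", C = "110", D = "111"); fed the 13-bit string above, it prints AAAAABCD:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    map<string, char> table;   // the example's prefix-free code table
    table["0"]   = 'A';
    table["10"]  = 'B';
    table["110"] = 'C';
    table["111"] = 'D';

    string bits = "0000010110111", cur;
    for (size_t i = 0; i < bits.size(); i++) {
        cur += bits[i];                 // grow the current prefix one bit at a time
        if (table.count(cur)) {         // prefix-free: a hit is always a complete code
            cout << table[cur];
            cur = "";
        }
    }
    cout << endl;                       // prints AAAAABCD
    return 0;
}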

As a second example, consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and the space character both occur with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. The letters “C”, “I” and “N” only occur once, however, so they will have the longest codes.

There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces with “00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”, “I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message with 8-bit ASCII encoding, a compression ratio of 2.8 to 1.
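
To check the 51-bit figure, note that the encoded length is simply the sum of each glyph's code length over the message (equivalently, frequency × code length per glyph). A minimal standalone sketch using the code lengths listed above:

#include <cstdio>
#include <cstring>
#include <map>
using namespace std;

int main()
{
    map<char, int> codeLen;             // code lengths from the encoding given above
    codeLen[' '] = 2; codeLen['A'] = 3; codeLen['C'] = 4; codeLen['E'] = 4;
    codeLen['H'] = 3; codeLen['I'] = 4; codeLen['N'] = 4; codeLen['T'] = 2;

    const char *text = "THE CAT IN THE HAT";
    int bits = 0;
    for (int i = 0; text[i]; i++)
        bits += codeLen[text[i]];       // accumulate code length per character

    int ascii = 8 * (int)strlen(text);
    printf("%d %d %.1f\n", ascii, bits, (double)ascii / bits);   // 144 51 2.8
    return 0;
}

The same arithmetic for "AAAAABCD" gives 5*1 + 1*2 + 1*3 + 1*3 = 13 bits, matching the first example.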
Input
The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing only the word “END” as the text string. This line should not be processed.
Output
For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.
Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END
Sample Output
64 13 4.9
144 51 2.8
The task is a straightforward application of Huffman coding: given a string consisting only of uppercase letters and underscores, output the length of its 8-bit ASCII encoding, the length of an optimal prefix-free (Huffman) encoding, and the compression ratio.

The Huffman length can be computed without building the tree explicitly: push every character's frequency into a min-heap, repeatedly pop the two smallest counts, add their sum to a running total, and push the sum back. When only one node remains, the running total equals the sum of frequency × code length over all characters, i.e. the optimal encoded length. The only special case is a string made of a single distinct character, which is encoded with one bit per character.

#include<stdio.h>
#include<string.h>
#include<queue>
#include<vector>
#include<algorithm>
#include<iostream>
using namespace std;

char a[1002];          // input string
char c[10] = "END";    // terminator
int b[100];            // frequencies: 'A'..'Z' in b[0..25], '_' in b[26]

int main()
{
    while (cin >> a) {
        memset(b, 0, sizeof(b));
        if (strcmp(a, c) == 0) break;
        int l = strlen(a), i;
        for (i = 0; i < l; i++) {
            if (a[i] == '_') b[26]++;
            else b[a[i] - 'A']++;
        }
        // min-heap of character frequencies
        priority_queue<int, vector<int>, greater<int> > q;
        for (i = 0; i <= 26; i++) {
            if (b[i]) {
                q.push(b[i]);
            }
        }
        // Huffman: keep merging the two smallest counts; each merge
        // contributes n1 + n2 bits to the total encoded length
        int n1, n2, sum = 0;
        while (q.size() >= 2) {
            n1 = q.top(); q.pop();
            n2 = q.top(); q.pop();
            //printf("%d%d\n",n1,n2);
            sum += n1 + n2;
            q.push(n1 + n2);
        }
        if (sum == 0) sum = l;   // only one distinct character: one bit per symbol
        printf("%d %d %.1f\n", l * 8, sum, 8.0 * l / sum);
    }
    return 0;
}
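
Why the merge sum is the answer: every time two nodes with counts n1 and n2 are merged, the characters in them end up one level deeper in the final Huffman tree, and n1 + n2 is added to sum; so each character's count is added once per bit of its code, and sum ends up equal to the total frequency × code-length product, the optimal encoded length. The if (sum == 0) branch covers inputs with only one distinct character, where the loop never runs; such a string is charged one bit per character, so an input like "AAAA" would print 32 4 8.0.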

That wraps up this post on hdu 5.2.7 1053 Entropy; hopefully it is of some help.


