Are Your Pipes Too Big?

2023-10-23 21:20

A few years ago, I was involved in a consulting project with a large company in the healthcare industry that was in the middle of a data center migration. After the networks and servers were stood up at the new location, they needed to migrate massive amounts of data in bulk, so the company secured a pair of OC-192 circuits, providing nearly 10Gbps of throughput in each direction on each circuit.

Everything seemed to be in order, so they began transferring data. To their surprise, they were only seeing throughput in the tens of megabits per second, even on servers connected to the network via gigabit Ethernet switches. After exhausting all the normal troubleshooting steps, they decided to bring in a fresh set of eyes. What we discovered may seem counter-intuitive: this company’s pipes were just too big. The company was suffering from a Long Fat Network (LFN).

Layer 4 of the OSI model, the Transport Layer, provides numerous functions, including:

  • Segmentation of data. If the amount of data sent by an application exceeds the capability of the network, or of the sender or receiver’s buffer, the Transport Layer can split up the data into segments and send them separately.

  • Ordered delivery of segments. If a piece of data is broken up into multiple segments and sent separately, there is no guarantee the segments will arrive at the destination in the correct order. The Transport Layer is responsible for receiving all the segments and, if necessary, putting the data stream back together in the correct order.

  • Multiplexing. If a single computer is running multiple applications, the Transport Layer differentiates between them and ensures data arriving on the network is sent to the correct application (a toy sketch of this follows the list).

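To make the multiplexing point concrete, here is a toy, localhost-only Python sketch (the port numbers and “application” names are made up): two listeners share one host, and the operating system delivers each connection’s data to the right one based purely on the destination port.

```python
import socket
import threading
import time

def app(name: str, port: int) -> None:
    # Each "application" is just a listener on its own port.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            print(f"{name} received: {conn.recv(1024).decode()}")

for name, port in [("app-A", 9101), ("app-B", 9102)]:
    threading.Thread(target=app, args=(name, port), daemon=True).start()

time.sleep(0.2)  # give the listeners a moment to start

# Two separate sends to the same host; the port alone decides which app gets what.
for port, msg in [(9101, "hello A"), (9102, "hello B")]:
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(msg.encode())

time.sleep(0.5)  # let the listener threads print before the script exits
```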

Some Transport Layer protocols also provide reliability via a mechanism that confirms delivery of the data. The majority of data traversing IP networks today utilizes TCP, and it is TCP’s reliability mechanism that is at the heart of the LFN problem.

When data is ready to be sent via TCP, the following sequence of events occurs:

  1. TCP on the initiating computer establishes a connection with TCP on the remote computer.

  2. Each computer advertises its Window Size, which is the maximum amount of data that the other computer should send before pausing to wait for an acknowledgment. The advertised window size is typically related to the size of the computer’s receive buffer (a small socket-level sketch of this follows the list).

  3. TCP begins transmitting the data in segments no larger than the maximum segment size, or MSS (also negotiated by the hosts). Once the amount of unacknowledged data in flight equals the window size, TCP pauses and waits for an acknowledgment. TCP will not send any more data until an acknowledgment has been received.

  4. If an acknowledgment is not received in a timely manner, TCP retransmits the data and once again pauses to wait for an acknowledgment.

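A note on step 2: the window a host advertises is driven largely by the size of its socket receive buffer, which an application can inspect and enlarge. A minimal Python sketch (the 4MB figure is arbitrary, and the kernel may clamp the request, for example to net.core.rmem_max on Linux, or report back double the requested value):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The default receive buffer, which bounds the window this host will advertise.
print("default receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")

# Ask for a larger buffer so a larger window can be advertised.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
print("receive buffer now:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")
```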

This send and wait method of reliability ensures that data has been delivered, and frees applications and their developers from having to reinvent the wheel every time they want to add reliability to their applications. However, this method lends itself to inefficiencies based on two factors: 1) how much data a computer sends before pausing, and 2) how long the computer has to wait to receive an acknowledgment. It is these two factors that are critical to understanding, and ultimately overcoming, the LFN problem.

We now have enough information to understand the LFN problem. TCP is efficient on Short Skinny Networks, but not on Long Fat Networks. The longer the network (i.e. the higher the latency), the longer TCP has to sit by twiddling its thumbs waiting for an acknowledgment before it can send more data. And the fatter the network (i.e. the faster a sender can serialize data onto the wire), the greater the percentage of time TCP is sitting by idly. When you put those two together — Longness and Fatness, or high latency and high bandwidth — TCP can become very inefficient.

Here is an analogy. Let’s say you have a coworker who talks a lot but really slowly. When you are having a conversation face to face, he can pretty much just keep talking and talking and talking. He gets near-instant acknowledgment that you heard what he said, so he can just keep talking. There is very little dead air. This is equivalent to a Short Skinny network.

Now let’s say your coworker becomes an astronaut and flies to Mars. He calls you on his astronaut phone to tell you about the trip, and he is really excited so he talks really fast. But the delay is really long. Since he can’t see you, he decides that every 25 words he will pause and wait for you to respond before he continues speaking.

Since your friend talks really fast, let’s say it only takes him five seconds to spit out 25 words before pausing to wait for a response. If the round trip delay between Earth and Mars is 10 seconds, he will only be able to speak 33% of the time. The other 67% of the time the line between you and the Martian is sitting idle.

It wouldn’t be such a big deal if he didn’t speak so fast. If it took him two minutes to speak those same 25 words instead of blurting them out in five seconds, he’d be speaking for about 92% of the time. Likewise, if the round trip latency between you were lower, let’s say two seconds, the utilization percentage of the line would go up as well. In this case he would speak for five seconds and then pause for two, achieving a utilization of about 71%.

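The arithmetic behind the analogy is simply talk time divided by talk time plus the wait for a reply. A quick sketch that reproduces the three figures above:

```python
def utilization(talk_seconds: float, round_trip_delay_seconds: float) -> float:
    """Fraction of time spent actually talking: talk / (talk + wait for a reply)."""
    return talk_seconds / (talk_seconds + round_trip_delay_seconds)

print(f"{utilization(5, 10):.0%}")    # 25 fast words, Mars delay   -> ~33%
print(f"{utilization(120, 10):.0%}")  # 25 slow words, Mars delay   -> ~92%
print(f"{utilization(5, 2):.0%}")     # 25 fast words, short delay  -> ~71%
```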

Let’s look at a real-world network scenario. Two computers, Computer A and Computer B, are located at two different sites that are connected by a T-3 link. The computers are connected to Gigabit Ethernet switches. The one-way latency is 70 milliseconds. Computer A initiates a data transfer to Computer B using an FTP PUT operation. The following sequence of events occurs (for the sake of simplicity, I will leave out some of the TCP optimizations that may occur in the real world):

  1. Computer A initiates a TCP connection to Computer B for the data transfer.

  2. Each computer advertises a window size of 16,384 bytes, and an MSS of 1,460 bytes is negotiated.

  3. Computer A starts sending data to Computer B. With an MSS of 1,460 bytes and a window size of 16,384 bytes, Computer A can send 11 segments before pausing to wait for an acknowledgment from Computer B.

So how efficient is our sample network? To figure this out, we need to calculate two numbers:

  1. The maximum amount of data that could be in flight on the wire at any given point in time. This is called the bandwidth-delay product. Think of it like an oil pipeline: how much oil is contained within a one-mile stretch of pipe if the oil is flowing at 10mph and you are pumping 10 gallons per minute? (Answer: 60 gallons.) In our example, the T-3 bandwidth is 44.736Mbps (or 5.592 megabytes per second) and the delay is 70 milliseconds. So the bandwidth-delay product is 5.592MB/s x 0.07s, or about 0.39MB (roughly 391KB). This means at any given point in time, if the T-3 link is totally saturated, there is about 0.39MB of data in flight on the wire in each direction.

  2. The amount of data actually transmitted by Computer A before pausing to wait for an acknowledgment. In our example, Computer A sends 11 segments, each of 1,460 bytes. So Computer A can only send 1,460 x 11 = 16,060 bytes (about 16KB) before having to pause and wait for an acknowledgment from Computer B.

So, if the network link could support roughly 391KB in flight at any given point, but Computer A can only put about 16KB on the wire before pausing to wait for an acknowledgment, the efficiency is only about 4%. That means the link is sitting idle roughly 96% of the time!

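Here is the same back-of-the-envelope math as a short Python sketch, using decimal units (1KB = 1,000 bytes) throughout:

```python
link_bps = 44.736e6          # T-3 line rate in bits per second
one_way_delay_s = 0.070      # 70 milliseconds
window_bytes = 11 * 1460     # 11 full-size segments before pausing for an ACK

# Bandwidth-delay product: how much data the link itself can hold in one direction.
bdp_bytes = (link_bps / 8) * one_way_delay_s
efficiency = window_bytes / bdp_bytes

print(f"bandwidth-delay product:   {bdp_bytes / 1e3:.1f} KB")     # ~391.4 KB
print(f"data sent per window:      {window_bytes / 1e3:.2f} KB")   # ~16.06 KB
print(f"single-session efficiency: {efficiency:.1%}")              # ~4.1%
```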

Solutions

Do you see the problem? In some cases, TCP sacrifices performance for the sake of reliability, particularly when latency and/or bandwidth is relatively high. But is it possible to achieve both performance and reliability? Can we have our cake and eat it too?

Yes, and we’ll look at the options in a moment. But first, let’s look at the two obvious but unrealistic solutions:

  1. Decrease latency. If we could decrease the amount of time it takes for a bit to make it from one side of the network to the other, computers wouldn’t have to go get a cup of coffee every time they send a TCP Window’s worth of data. But until someone comes out with the Quantum Wormhole router, or figures out how to increase the speed of light or bend space, you’re probably stuck with the latency you’ve got.

  2. Decrease throughput. If we turn a Fat network into a Skinny network without changing the latency or the TCP window size, it stands to reason that link utilization would go up. But what kind of network engineer ever wanted less bandwidth?

OK, now that we’ve got that out of the way, let’s look at the real solutions:

TCP window scaling. One might wonder why the TCP window size field is only 16 bits long, allowing for a maximum window of 65,535 bytes. But remember that TCP’s reliability mechanism was written in a day when data link bandwidth was measured in bits. Today, 10 Gigabit links are common (and 40Gbps and 100Gbps links are becoming more common).

RFC 1323, titled “TCP Extensions for High Performance,” was published in 1992 to address some of the performance limitations of the original TCP specification in a world of ever-increasing bandwidth. In particular, its Window Scale option (TCP option kind 3) addressed the 65,535 byte window size limitation. Rather than widening the 16-bit window size field in the TCP header (and thus rendering it incompatible with existing implementations), the option carries a value by which the advertised window size is bitwise shifted to the left. A value of 1 shifts the 16 bits to the left by 1 bit, doubling the window size. A value of 2 shifts them to the left by 2 bits, quadrupling the window size. The maximum value of 14 shifts them to the left by 14 bits, multiplying the window size by 2¹⁴ (for a maximum window of roughly 1GB).

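To see how far a shift of up to 14 goes, here is a small sketch (my own illustration, not something defined in the RFC) that computes the smallest window-scale shift whose scaled 65,535-byte window covers a path’s bandwidth-delay product:

```python
import math

def window_scale_needed(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Smallest RFC 1323 shift whose scaled 65,535-byte window covers the path's BDP."""
    bdp_bytes = (bandwidth_bps / 8) * rtt_seconds
    if bdp_bytes <= 65_535:
        return 0
    return min(14, math.ceil(math.log2(bdp_bytes / 65_535)))

# A 10Gbps path with a 70ms round trip needs roughly an 87.5MB window to stay full;
# a shift of 11 (65,535 << 11, about 134MB) is the smallest that covers it.
print(window_scale_needed(10e9, 0.070))       # -> 11
print(window_scale_needed(44.736e6, 0.140))   # the T-3 example at a 140ms RTT -> 4
```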

Increasing the window size has the obvious benefit of allowing TCP to send more segments before pausing to wait for a response. However, this performance benefit comes with some risk, such as buffer issues and larger retransmits when segments are lost. Virtually every modern operating system in use today uses TCP window scaling by default, so if you’re seeing small window sizes on the network, you may need to do some troubleshooting. Are there any firewalls or IPS devices on the network stripping TCP options? Are hosts scaling back the window size due to buffers filling up or excessive packet loss?

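As a starting point for that troubleshooting, and assuming a Linux host (an assumption on my part), the kernel’s window scaling setting and receive buffer limits can be read straight out of /proc:

```python
from pathlib import Path

# net.ipv4.tcp_window_scaling: 1 means the kernel will negotiate RFC 1323 window scaling.
scaling = Path("/proc/sys/net/ipv4/tcp_window_scaling").read_text().strip()

# net.ipv4.tcp_rmem: the min, default, and max sizes the receive buffer can auto-tune to.
rmem_min, rmem_default, rmem_max = Path("/proc/sys/net/ipv4/tcp_rmem").read_text().split()

print("window scaling enabled:", scaling == "1")
print(f"receive buffer (min/default/max): {rmem_min}/{rmem_default}/{rmem_max} bytes")
```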

Multiple TCP sessions. The problem described here applies to a single TCP session only. In the earlier example, Computer A’s TCP session was only utilizing about 4% of the link’s bandwidth. If 25 computers were transmitting, each using a single TCP session, the link could be driven to essentially full utilization. Or, if Computer A were able to open 25 TCP sessions simultaneously, the same result could be achieved. This will almost never be a good solution to the problem at hand, but it is included here for completeness.

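A purely illustrative sketch of the idea (the host, port, and naive byte-striping scheme are all made up): one bulk payload pushed over several TCP connections at once, so the aggregate amount of data in flight is many windows rather than one.

```python
import concurrent.futures
import socket

HOST, PORT, SESSIONS = "198.51.100.10", 9000, 25   # placeholder destination

def send_chunk(chunk: bytes) -> int:
    # Each chunk gets its own TCP connection, and therefore its own window.
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(chunk)
    return len(chunk)

payload = b"\x00" * (100 * 1024 * 1024)                    # 100MB of dummy data
chunks = [payload[i::SESSIONS] for i in range(SESSIONS)]   # naive byte striping

with concurrent.futures.ThreadPoolExecutor(max_workers=SESSIONS) as pool:
    total = sum(pool.map(send_chunk, chunks))
print(f"sent {total:,} bytes over {SESSIONS} parallel sessions")
```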

Different transport layer protocol. TCP isn’t the only transport layer protocol available. TCP’s unreliable cousin, the User Datagram Protocol (UDP), provides no guarantee of delivery and never waits for acknowledgments, so it is free to consume all the bandwidth the path will give it.

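A minimal sketch of that trade-off (the destination address is a placeholder): UDP datagrams go out with no handshake, no window, and no acknowledgments, so any reliability has to be rebuilt by the application itself, which is exactly what UDP-based bulk transfer tools do.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(1000):
    # Nothing here ever pauses to wait for the far end.
    sock.sendto(f"chunk {i:04d}".encode(), ("203.0.113.5", 9000))
```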

Caching. Content caching utilizes proxies to store data closer to the client. The first client to access the data “primes” the cache, while subsequent requests for the same data are served from the local proxy. Content caching is a band-aid solution that is becoming increasingly obsolete in an age of constantly changing and dynamically created content, but it is still worth mentioning.

Edge computing. Paid services like Akamai decentralize content and push it to the edges of the network, as close to the clients as possible. One of the results is lower latency between clients and servers.

WAN optimization and acceleration. A variety of products exist that employ various techniques such as data deduplication, compression, and dictionary templating to increase the perceived performance of a WAN link.

Conclusion

Earlier I said the company in question here had too much bandwidth. Well, that wasn’t really the case. Their real problem was the behavior of the protocol they were using. One of their system admins, at the direction of a software developer, had monkeyed with the configuration of their servers in an attempt to tune application performance. The application wasn’t able to pick up packets from the buffer fast enough (due to a software bug), causing TCP to scale back its window size, but not before buffers filled, packets were dropped, and retransmits occurred. In an attempt to eliminate the retransmits, the admin had statically set the TCP window size on the servers to a relatively low value. Since TCP was being artificially constrained, it was never able to scale up to fill the available bandwidth of the data link.

Today, barring misconfiguration, most networks won’t run into LFN problems because TCP window scaling is widely used by default. However, with bandwidth ever on the rise, performance will most certainly become more and more of an issue because latency is usually a fixed variable (unless you figure out how to bend space-time), and we are likely to see similar symptoms. Thoroughly understanding the protocols we use every day will help network engineers become awesomer!

This story was originally published on Network World and has been updated here by the author.

Website: www.maveris.com

Email: info@maveris.com

Maveris exists to help your organization reach its fullest potential by providing thought leadership in IT and cyber so you can connect fearlessly. To learn more visit us at: www.maveris.com

Source: https://medium.com/maverislabs/are-your-pipes-too-big-c3e78e769161
