Computer science team shows talent at prestigious programming contest · News · Lafayette College

Members of Lafayette’s team ranked higher than teams from such strong institutions as UC Berkeley and Cornell

By Bryan Hay

A Lafayette College computer science team performed with distinction and exceeded expectations at the world’s largest programming competition, in which major universities and colleges from around the world participate.

(LR) Prof. Frank Xia, team advisor, Eliso Morazara ’25, Peter Li ’23, and Lekso Borashvili ’23

Each year, more than 2,500 teams from North America compete in regional and divisional programming contests. Among them, approximately 50 top teams are invited to the North America Championship to compete for advancement to the World Finals.

For the first time in the College’s history, a Lafayette team earned a place in the International Collegiate Programming Contest (ICPC) North America Championship and North America Programming Camp (NAC-NAPC), held May 26-31 at the University of Central Florida.

Team Lafayette 1 participated in a wide range of activities, learning problem-solving techniques in training sessions, honing skills in practice competitions, and exploring future opportunities at career fairs.

Members of Team Lafayette 1 are Lekso Borashvili ’23, Peter Li ’23, and Claire Liu ’23. Due to a scheduling conflict, Liu was substituted by Eliso Morazara ’25 at the event. Lafayette College was one of three liberal arts colleges invited to the NAC, along with Swarthmore College and Carleton College.

The team participated in a programming competition organized by the National Security Agency (NSA), in which they tried to solve real-world problems in cybersecurity and cryptography. Team Lafayette 1 ranked 25th among 50 teams in the NSA Challenge Competition.

During the five-hour final programming competition at the ICPC North America Championship (NAC), held May 30, teams raced to solve as many problems as possible in the shortest time. Among 50 teams, the Lafayette team ranked 29th with four problems solved.

(LR): Peter Li ’23, Eliso Morazara ’25, and Lekso Borashvili ’23

“Considering the level of competition, this achievement was above my expectations,” says Frank Xia, associate professor of computer science and team adviser. “Despite their relative inexperience, the members of the Lafayette team ranked higher than teams from such strong institutions as UC Berkeley and Cornell.”

More importantly, the competition was a transformative experience for the students, he says.

“They had the opportunities to learn from the coaches and students in top institutions,” Xia notes. “They tested their skills in the highest-level competition and gained confidence in themselves. It strongly motivates them to strive for better results in the future.”

The competition encouraged Xia to build on the strengths of Lafayette’s computer science department and improve competitive programming at Lafayette.

“In the short term, I plan to attract more interest in competitive programming from a wider audience,” he added. “This will include activities such as expanding our training program, creating a competitive programming club, and holding regular practice competitions.”

In the long term, Xia said he would like to build an infrastructure for recruiting and training students.

“Many of the successful institutions have courses on competitive programming, either as electives or special topics courses,” he says. “Creating such a course will not only provide regular training for our competition team members, but it will also improve other students’ knowledge and skill levels in data structures and algorithms.”

Learn more and read the final results from NAC.

Developers beware: AI pair programming comes with pitfalls

AI pair programming tools, designed to speed up development, bring benefits ranging from suggestions for simple lines of code to the ability to build and deploy entire applications, but the pitfalls are significant.

In addition to improving productivity by alleviating some of the more mundane coding tasks, developers who use AI pair programming tools experience less frustration and can focus on more satisfying work, according to a GitHub survey of 2,000 developers. An array of these tools exists, including this year’s releases GitHub Copilot, Amazon CodeWhisperer, and Tabnine. They joined a long list of existing AI-powered bots such as Kite Team Server, DeepMind’s AlphaCode, and IBM’s Project CodeNet.

While AI pair programming shows promise in generating predictable, template-like code — reusable code snippets such as conditional statements or loops — developers should question the quality and suitability of code suggestions, said Ronald Schmelzer, managing partner with the CPMAI AI project management certification at Cognilytica.
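To illustrate the kind of predictable, template-like code Schmelzer describes, the sketch below shows boilerplate of the sort an assistant might plausibly complete from a short docstring: a loop plus a conditional. The function name and logic here are hypothetical, not output from any particular tool.

```python
# Hypothetical example of the boilerplate an AI assistant might suggest
# from a signature and docstring: exactly the kind of predictable,
# template-like code (a loop plus a conditional) these tools handle well.

def filter_positive(values):
    """Return only the positive numbers from a list."""
    result = []
    for value in values:
        if value > 0:
            result.append(value)
    return result
```

Even for suggestions this simple, Schmelzer’s caution applies: a reviewer still has to confirm details such as whether the comparison should be strict (`> 0`) or inclusive (`>= 0`) for the problem at hand.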

“It runs into lots of problems around whether or not the code is applicable, security holes and bugs, and myriads of copyright issues,” he said.

Pitfalls of AI pair programming

Despite the apparent benefits — many of which were outlined in the GitHub survey — developers should be wary of AI-suggested code completions because they aren’t guaranteed to be accurate, said Chris Riley, senior manager of developer relations at marketing tech firm HubSpot. Developers must closely review any suggestions, which can negate any time saved searching developer sites for code snippets, he said.

Another area of concern is supportability, Riley said. If a significant percentage of the code is AI-suggested, developers may not be able to support that code if it is the source of a production issue, he said.

In addition to questions concerning applicability and supportability, code completion bots introduce unique security concerns. While some code completion tools such as Kite Team Server can run behind a company’s firewall, others rely on public artifact repositories, which may be insecure, Riley said. For example, it may be possible for attackers to exploit the model to sneak in zero-day vulnerabilities, he said.
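A minimal sketch of the review Riley’s security concern implies, assuming Python and its standard `sqlite3` module: the first query shows an injection-prone pattern that a completion tool trained on public code could plausibly reproduce, and the second shows the parameterized form a reviewer should insist on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice"

# Risky pattern a code-completion tool might suggest: building SQL by
# string interpolation, which is open to SQL injection if user_input
# ever contains attacker-controlled text.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"

# Safer replacement a reviewer should insist on: a parameterized query,
# which lets the database driver handle escaping.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Both queries behave identically on benign input, which is exactly why the unsafe version can slip through review when an AI-suggested snippet "just works."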

Community-provided code adds another potentially significant stumbling block: copyright issues. As AI pair programming tools are trained on a wide range of code with various licensing agreements, it becomes difficult to ascertain ownership, Cognilytica’s Schmelzer said. In addition, if the code generator is being trained on data from a shared code repository — especially GitHub — then developers could mix copyrighted or private code with public code without any identified source, he said.

The rise of AI pair programming

Many of the issues with modern AI pair programming tools weren’t present in early code completion products, such as Microsoft’s IntelliSense, first introduced in 1996. These tools gave developers simple type-ahead completion within the compiler or IDE, without public-repository vulnerabilities or supportability concerns. Developers could take this basic code completion a step further with linters — tools that can prevent simple syntax errors — to check the suggested code, Riley said.
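The syntax-level checking Riley attributes to linters can be sketched with Python’s built-in `compile()`, which rejects code containing simple syntax errors before it ever runs. A real linter also flags style and likely-logic issues; this is only a minimal stand-in:

```python
# Minimal stand-in for the syntax-error checking a linter performs,
# using Python's built-in compile() to vet a suggested snippet before use.

def has_valid_syntax(source: str) -> bool:
    """Return True if `source` parses as valid Python."""
    try:
        compile(source, "<suggested-code>", "exec")
        return True
    except SyntaxError:
        return False

print(has_valid_syntax("for i in range(3): print(i)"))  # valid
print(has_valid_syntax("for i in range(3) print(i)"))   # missing colon
```

A check like this catches only what fails to parse; it says nothing about whether the suggested code is applicable, secure, or supportable, which is why the concerns above remain.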

I don’t think we are at the point where these tools can be used beyond rapid prototyping, education and suggestions.

Chris Riley, senior manager of developer relations, HubSpot

“I don’t think developers at this point had any expectations outside of that, and we were happy with the Google-style suggestions as you typed,” Riley said. “It was there to increase efficiency, not to be the initial source of the code.”

Modern AI pair programmers go beyond simple code completion and linting to suggest full blocks of code, Riley said. The tools can provide contextual code completions or write complete functions; advanced text generators powered by OpenAI’s GPT-3 — such as Copilot — can build and deploy entire applications and transform simple English queries into SQL statements that work across databases.

“After being a longtime skeptic of the genuineness of the AI-driven code completion tools, I’ll have to admit it seemed surreal the first time I tried [Copilot],” said Anthony Chavez, founder and CEO of Codelab303. “I feel like it could read my mind at times.”

But despite the technological advances, the issues surrounding modern AI code completion tools mean they’re limited in their utility, Riley said.

“I don’t think we are at the point where these tools can be used beyond rapid prototyping, education and suggestions,” he said.