Copilot Chat, Gemini, Grok, ChatGPT, or Replit — what is the best generative AI chatbot for a front-end developer?

by Igor Buts

In this article, we will compare popular generative AI chatbots to understand which ones are best suited for solving common frontend developer tasks. I decided to test the chatbots against tasks that I often do as a frontend developer and where I've found AI chatbots to be particularly helpful.

After reading this article, I hope you will have more information to choose an AI chatbot, either free or paid, that works for you.

The list of chatbots and their LLMs tested

Paid:
  • ChatGPT 4 ($20/month*)
  • Gemini Advanced ($20/month*, as part of the Google One AI Premium plan, which also includes other features)
  • GitHub Copilot Chat ($10/month*)
  • Grok ($16/month*, as part of the X Premium Plus subscription)
  • Replit Advanced ($20/month*)

Free:
  • ChatGPT 3.5
  • Gemini
  • Replit Basic

* Prices are current as of the time of publishing this article.

The research started in February and finished in March 2024. Please note that these results could become obsolete after some time.

List of tasks:
  • Prepare sample data (mock) based on the example provided.
  • Write documentation (JSDoc) for a React hook.
  • Write a function that performs a given task and cover it with unit tests.
  • Implement a TypeScript interface for the JavaScript object provided.
  • Perform code refactoring.
  • Implement a React UI component with styles and animation.

Calculation rules:

  • We have eight chatbots as contenders, so in each task, first place receives 8 points, second place 7 points, and so on, down to 1 point for the last.
  • First place in a task goes to the chatbot that provides the correct answer with the fewest prompts.
  • Several chatbots can share the same place if their solutions brought the same value for the same number of prompts.
  • All chatbots completed the same task on the same day, and each chatbot's first original answer was used, so their performance was compared at the same point in time. Since AI models can answer the same question differently over time, and LLMs get updated, please note that you may receive a different answer when asking any of the questions from the examples below.

1. Prepare sample data (mock) based on the example provided

I often need such mock data when working on front-end tasks while the backend API is still in progress. Previously, creating lists of repeated mock data that looked real and varied took some time. Now AI assistants help me do this much faster.

Task: Prepare the sample data. Return `const events = [...]` with 5 items in the format as below. Make sure there is at least one event with the `status: ONGOING` and one with the `status: PAST`. Set dates for events taking into account `status` and assuming that today is `08-02-2024` (date format 'dd-mm-yyyy'). Put events in an array sorted by date ascending. No other text is needed in the answer, just the code.

const events = [{
  id: 1,
  title: 'Employee Onboarding',
  startTime: '10:00',
  endTime: '11:00',
  date: '09-02-2024',
  host: {
    id: 12313,
    name: 'John Doe',
  },
  attendeesCount: 5,
  maxAttendees: 10,
  meetingLink: 'https://meet.google.com/abc-defg-hij',
  status: 'UPCOMING',
  state: 'AZ',
}]

Solutions by places:

First place, 8 points: Gemini Advanced completed the task from the first prompt. The hardest part for chatbots was setting the correct date for the ONGOING event according to the task description, which Gemini Advanced did well. Also, the data variety is good here: all titles look logical, and the start and end times differ among the items.

const events = [
  {
    id: 1,
    title: 'Project Retrospective',
    startTime: '13:00',
    endTime: '14:30',
    date: '06-02-2024',
    host: {
      id: 9876,
      name: 'Jane Smith',
    },
    attendeesCount: 8,
    maxAttendees: 12,
    meetingLink: 'https://meet.google.com/xyz-mnop-qrs',
    status: 'PAST',
    state: 'CA',
  },
  {
    id: 2,
    title: 'Team Brainstorming',
    startTime: '09:30',
    endTime: '10:30',
    date: '08-02-2024',
    host: {
      id: 3456,
      name: 'Alex Johnson',
    },
    attendeesCount: 3,
    maxAttendees: 6,
    meetingLink: 'https://meet.google.com/jkl-opqr-stu',
    status: 'ONGOING',
    state: 'NY',
  },
  {
    id: 3,
    title: 'Quarterly Planning',
    startTime: '14:00',
    endTime: '16:00',
    date: '09-02-2024',
    host: {
      id: 7890,
      name: 'Sarah Williams',
    },
    attendeesCount: 10,
    maxAttendees: 15,
    meetingLink: 'https://meet.google.com/ghi-jklm-nop',
    status: 'UPCOMING',
    state: 'TX',
  },
  {
    id: 4,
    title: 'Client Presentation',
    startTime: '11:00',
    endTime: '12:00',
    date: '10-02-2024',
    host: {
      id: 5432,
      name: 'Mark Anderson',
    },
    attendeesCount: 2,
    maxAttendees: 5,
    meetingLink: 'https://meet.google.com/def-ghij-klm',
    status: 'UPCOMING',
    state: 'FL',
  },
  {
    id: 5,
    title: 'Code Review',
    startTime: '15:00',
    endTime: '16:30',
    date: '12-02-2024',
    host: {
      id: 6789,
      name: 'Emily Parker',
    },
    attendeesCount: 4,
    maxAttendees: 7,
    meetingLink: 'https://meet.google.com/opq-rstu-vwx',
    status: 'UPCOMING',
    state: 'WA',
  },
];
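
As a side note, a small helper like the sketch below can sanity-check whether the generated dates agree with the statuses. This is my own illustration, not part of any chatbot's answer; `TODAY`, `toISO`, and `expectedStatus` are hypothetical names, and start/end times are ignored for simplicity.

// Sanity check (illustration only): does each event's status match its date,
// assuming today is '08-02-2024' in 'dd-mm-yyyy' format?
const TODAY = '08-02-2024';

// 'dd-mm-yyyy' -> 'yyyy-mm-dd', so plain string comparison orders dates correctly
const toISO = (d) => d.split('-').reverse().join('-');

const expectedStatus = (event) => {
  if (toISO(event.date) < toISO(TODAY)) return 'PAST';
  if (toISO(event.date) > toISO(TODAY)) return 'UPCOMING';
  return 'ONGOING';
};

console.log(events.every((event) => event.status === expectedStatus(event))); // true for a correct mock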

Second place, 7 points: Copilot Chat. It also did this in one prompt, but the data variety is worse (e.g., all events have the same start and end times, which won’t help when testing different times in the UI), so it takes second place.

const events = [
  {
    id: 1,
    title: 'Employee Onboarding',
    startTime: '10:00',
    endTime: '11:00',
    date: '07-02-2024',
    host: {
      id: 12313,
      name: 'John Doe',
    },
    attendeesCount: 5,
    maxAttendees: 10,
    meetingLink: 'https://meet.google.com/abc-defg-hij',
    status: 'PAST',
    state: 'AZ',
  },
  {
    id: 2,
    title: 'Team Meeting',
    startTime: '10:00',
    endTime: '11:00',
    date: '08-02-2024',
    host: {
      id: 12314,
      name: 'Jane Doe',
    },
    attendeesCount: 7,
    maxAttendees: 15,
    meetingLink: 'https://meet.google.com/abc-defg-hik',
    status: 'ONGOING',
    state: 'AZ',
  },
  {
    id: 3,
    title: 'Project Kickoff',
    startTime: '10:00',
    endTime: '11:00',
    date: '09-02-2024',
    host: {
      id: 12315,
      name: 'Bob Smith',
    },
    attendeesCount: 10,
    maxAttendees: 20,
    meetingLink: 'https://meet.google.com/abc-defg-hil',
    status: 'UPCOMING',
    state: 'AZ',
  },
  {
    id: 4,
    title: 'Design Review',
    startTime: '10:00',
    endTime: '11:00',
    date: '10-02-2024',
    host: {
      id: 12316,
      name: 'Alice Johnson',
    },
    attendeesCount: 8,
    maxAttendees: 15,
    meetingLink: 'https://meet.google.com/abc-defg-him',
    status: 'UPCOMING',
    state: 'AZ',
  },
  {
    id: 5,
    title: 'Code Review',
    startTime: '10:00',
    endTime: '11:00',
    date: '11-02-2024',
    host: {
      id: 12317,
      name: 'Charlie Brown',
    },
    attendeesCount: 6,
    maxAttendees: 10,
    meetingLink: 'https://meet.google.com/abc-defg-hin',
    status: 'UPCOMING',
    state: 'AZ',
  },
]

Third place, 6 points: ChatGPT 3.5. It took me two prompts to get the desired result because the events were not sorted by date after the first one. After asking, “Why events in the array are not sorted by date ascending?”, it returned the code below, which meets the requirements:

const events = [
  {
    id: 5,
    title: "Product Launch",
    startTime: "11:00",
    endTime: "12:00",
    date: "05-02-2024",
    host: {
      id: 24680,
      name: "Sarah Williams",
    },
    attendeesCount: 25,
    maxAttendees: 30,
    meetingLink: "https://meet.google.com/zzz-yyy-xxx",
    status: "PAST",
    state: "CA",
  },
  {
    id: 3,
    title: "Project Review",
    startTime: "09:30",
    endTime: "10:30",
    date: "06-02-2024",
    host: {
      id: 54321,
      name: "Emily Johnson",
    },
    attendeesCount: 12,
    maxAttendees: 20,
    meetingLink: "https://meet.google.com/pqr-stu-vwx",
    status: "PAST",
    state: "NY",
  },
  {
    id: 2,
    title: "Team Meeting",
    startTime: "14:00",
    endTime: "15:00",
    date: "08-02-2024",
    host: {
      id: 98765,
      name: "Jane Smith",
    },
    attendeesCount: 8,
    maxAttendees: 15,
    meetingLink: "https://meet.google.com/xyz-uvw-rst",
    status: "ONGOING",
    state: "CA",
  },
  {
    id: 1,
    title: "Employee Onboarding",
    startTime: "10:00",
    endTime: "11:00",
    date: "09-02-2024",
    host: {
      id: 12313,
      name: "John Doe",
    },
    attendeesCount: 5,
    maxAttendees: 10,
    meetingLink: "https://meet.google.com/abc-defg-hij",
    status: "UPCOMING",
    state: "AZ",
  },
  {
    id: 4,
    title: "Training Session",
    startTime: "13:00",
    endTime: "15:00",
    date: "12-02-2024",
    host: {
      id: 13579,
      name: "Michael Brown",
    },
    attendeesCount: 6,
    maxAttendees: 10,
    meetingLink: "https://meet.google.com/ghi-jkl-mno",
    status: "UPCOMING",
    state: "TX",
  },
];

4th place, 5 points: Replit Advanced. It took two prompts to get a result that met the requirements. After the first prompt, the ongoing event was on the wrong date. After the second, that was fixed, but the ONGOING event was simply titled “Ongoing Event”, which is why this result is a little worse than ChatGPT 3.5’s.

const events = [
  {
    id: 1,
    title: 'Project Planning',
    startTime: '10:00',
    endTime: '12:00',
    date: '06-02-2024',
    host: {
      id: 23123,
      name: 'Alice Johnson',
    },
    attendeesCount: 8,
    maxAttendees: 20,
    meetingLink: 'https://meet.google.com/aaa-bbbb-cccc',
    status: 'PAST',
    state: 'CA',
  },
  {
    id: 2,
    title: 'Marketing Strategy Discussion',
    startTime: '13:00',
    endTime: '14:00',
    date: '07-02-2024',
    host: {
      id: 23124,
      name: 'Bob Williams',
    },
    attendeesCount: 15,
    maxAttendees: 50,
    meetingLink: 'https://meet.google.com/ddd-eeee-ffff',
    status: 'PAST',
    state: 'WA',
  },
  {
    id: 3,
    title: 'Ongoing Event',
    startTime: '09:00',
    endTime: '10:30',
    date: '08-02-2024',
    host: {
      id: 23125,
      name: 'Charlie Brown',
    },
    attendeesCount: 20,
    maxAttendees: 30,
    meetingLink: 'https://meet.google.com/ggg-hhhh-iiii',
    status: 'ONGOING',
    state: 'TX',
  },
  {
    id: 4,
    title: 'New Product Launch Prep',
    startTime: '11:00',
    endTime: '12:00',
    date: '09-02-2024',
    host: {
      id: 23126,
      name: 'Diana Prince',
    },
    attendeesCount: 25,
    maxAttendees: 40,
    meetingLink: 'https://meet.google.com/jjj-kkkk-llll',
    status: 'UPCOMING',
    state: 'NV',
  },
  {
    id: 5,
    title: 'Quarterly Business Review',
    startTime: '10:00',
    endTime: '12:00',
    date: '10-02-2024',
    host: {
      id: 23127,
      name: 'Ethan Hunt',
    },
    attendeesCount: 30,
    maxAttendees: 50,
    meetingLink: 'https://meet.google.com/mmm-nnnn-oooo',
    status: 'UPCOMING',
    state: 'NY',
  }
];

5th place, 4 points: Gemini. The result, received after two prompts, is similar to the ones above, so I won’t include it here. But after the first prompt, Gemini answered: “I can't assist you with that, as I'm only a language model and don't have the capacity to understand and respond.”, which was strange and unexpected. Some users might not even write a second prompt after such an answer; that’s why it is in 5th place.

6th place, 3 points: ChatGPT 4. It took three prompts. After the first, the ongoing event was on the day before “today.” After asking to put it on the appropriate date, the ongoing event went missing. And after asking, “Where is the event with the status `ONGOING`?”, an acceptable response was finally received. I am not including it here, as it is pretty similar to the others. And yes, it sometimes happens that an advanced model’s answers are worse than the basic model’s, as this example shows.

7th place, 2 points: Replit Basic. The result was received after 6 prompts. The order of the items was wrong. This model has a problem reading the context of the conversation. If you tell it, “Take the code from your previous answer and update this and that,” it can reply with something like: “I am sorry, but it appears that you have not provided the relevant code that needs to be transformed.” So I had to write a prompt like “Here is the code: [...], put them in an array in the correct order sorting by date ascending” to receive an acceptable answer.

8th place, 1 point: Grok. I gave up after 5 prompts. Grok had problems setting the ongoing event to the date `08-02-2024`. I asked in different ways to fix that, but the ongoing event stayed on `12-02-2024`, and I didn’t see any sign that the model understood me.

2. Write documentation (JSDoc) for a React hook

Writing some complex functions or hooks, like in this case, often requires thorough documentation for others to easily understand what that piece of code does and how to use it. Lately, rather than composing documentation myself, I assign this task to AI assistants and simply review their output.

// Write me a JSDoc for the hook below:

import { useEffect, useRef } from "react";

export const useOutsideClick = (callback: () => void) => {
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const handleClickOutside = (event: MouseEvent) => {
      if (ref.current && !ref.current.contains(event.target as Node)) {
        callback();
      }
    };

    document.addEventListener("mousedown", handleClickOutside);

    return () => {
      document.removeEventListener("mousedown", handleClickOutside);
    };
  }, [callback]);

  return ref;
};

Solutions by places:

First place, 8 points: ChatGPT 4. It was one of only two chatbots that provided not only the description, @param, and @returns, but also an @example. Reading this makes it clear what the hook does and how to use it.

/**
 * Custom React hook that detects clicks outside of the specified component.
 * When a click is detected outside of the component referenced by the hook,
 * the provided callback function is executed.
 *
 * @param {() => void} callback - The callback function to be executed 
 * when a click is detected outside of the component.
 *
 * @returns {React.MutableRefObject<HTMLDivElement | null>} - 
 * A React ref object that should be attached to the component 
 * you want to monitor for outside clicks.
 *
 * @example
 * const MyComponent = () => {
 *   const callback = () => console.log('Clicked outside!');
 *   const ref = useOutsideClick(callback);
 *
 *   return <div ref={ref}>My Component</div>;
 * };
 *
 * Note: The hook uses the 'mousedown' event to detect outside clicks.
 */

Second place, 7 points: ChatGPT 3.5. The result is similar to ChatGPT 4’s, and it makes the hook’s logic easy to understand. The only issue was a small syntax error (the JSDoc comment was closed earlier than needed), so I could not just copy-paste it into my code and have it work. The error was fixed after two more prompts. Even so, I still place this one second, as the syntax error was not a big deal and the result brings more value than the ones below (which worked after one prompt).

/**
 * A custom React hook that provides a ref to track clicks outside 
 * of a specified element
 * and invokes a callback function when such clicks occur.
 *
 * @param {() => void} callback - The function to be called when a click 
 * occurs outside the specified element.
 *
 * @returns {React.RefObject<HTMLElement>} A React ref object 
 * that should be attached to the element you want 
 * to track clicks outside of. Use this ref in your JSX to specify 
 * the element to monitor for outside clicks.
 *
 * @example
 * // Usage in a functional component
 * const App = () => {
 *   const handleClickOutside = () => {
 *     // Your code to handle outside click event
 *   };
 *
 *   const elementRef = useOutsideClick(handleClickOutside);
 *
 *   return (
 *     <div ref={elementRef}>
 *       {`Content to monitor for outside clicks`}
 *     </div>
 *   );
 * };
 */

Third place, 6 points: Gemini, Gemini Advanced, Replit Basic, Replit Advanced, Copilot Chat. All these assistants provided similar results, with a description, @param, and @returns, but without an @example. I could have asked them to add an example, but I intended to analyze their ability to provide the best result without my specifying all the details.

// Gemini
/**
 * @description Custom React hook to detect clicks outside 
 * of a specific element.
 * @param {() => void} callback - A function to be executed when a click
 * occurs outside the element.
 * @returns {React.RefObject<HTMLDivElement>} A ref object to be attached
 * to the target element.
 */


// Gemini advanced
/**
 * A React hook that detects clicks outside of a specified DOM element.
 *
 * @param {() => void} callback - A function to be executed when a click
 * occurs outside the referenced element.
 * @returns {React.RefObject<HTMLDivElement>} - A ref object that should be
 * attached to the element you want to monitor for outside clicks.
 */


// Replit Basic
/**
 * A custom React hook that triggers a callback when a click event occurs 
 * outside the referenced element.
 * @param {() => void} callback - The function to be called when 
 * the outside click event is detected.
 * @returns {React.RefObject<HTMLDivElement>} - A ref object representing
 * the HTML div element to be referenced.
 */


// Replit Advanced
/**
 * Hook that alerts clicks outside of the passed ref's element.
 *
 * @param {() => void} callback Function to call on outside click.
 * @returns {React.MutableRefObject<HTMLDivElement | null>} A React ref object
 * with the current property pointing to a DOM element or null.
 */


// Copilot Chat
/**
 * Hook that alerts clicks outside of the passed ref
 * @param {() => void} callback - Callback function to be executed 
 * when clicked outside
 * @returns {React.MutableRefObject} - Ref to be attached to the element 
 * to detect outside clicks
 */

4th place, 5 points: Grok. Compared to the results above, Grok did not provide a description of the hook and used less detailed typing for @param and @returns.

/**
 * useOutsideClick hook
 *
 * @param {function} callback - A callback function to be executed when 
 * a click occurs outside of the referenced element.
 * @returns {object} ref - A React ref object for the target element.
 */

3. Write a function that performs a given task and cover it with unit tests

In this case, I ask chatbots to write a function that performs data transformation and to cover it with unit tests. I often use chatbots when I have input and output data and need to implement transformation logic. Sometimes they can provide me with the final solution or a good starting point that needs slight modification.

Task:


I have items as JS array:
```js
const items = [
  {
    id: 1,
    title: "Item 1",
  },
  {
    id: 2,
    title: "Item 2",
    archived: true,
  },
  {
    id: 3,
    title: "Item 3",
    subItems: [
      {
        id: 3.1,
        title: "SubItem 3.1",
      },
      {
        id: 3.2,
        title: "SubItem 3.2",
      },
      {
        id: 3.3,
        title: "SubItem 3.3",
        archived: true,
      },
    ],
  },
  {
    id: 4,
    title: "Item 4",
    archived: true,
    subItems: [
      {
        id: 4.1,
        title: "SubItem 4.1",
      },
    ],
  },
];
```


Write me a function that will return data in such format:
```
{ 
  items: [], // items transformed
  total: 100,
}
```
Transformation rules:
1. if item is archived, skip it and skip it's subItems
2. if item has subItems, skip it, but add subItems to the result. if subItem is archived, skip it.
3. display the total count of the resulting items in the `total` field.

Also write a unit test (using jest) for this function.

Solutions by places:

First place, 8 points: Copilot Chat, Grok, and Replit Advanced. The solutions are similar, and the function and test worked on the first try. Some of them may be slightly worse in terms of performance, but I won’t consider that here, as I didn’t ask about performance specifically.

// Copilot Chat
export function transformItems(items) {
  let result = { items: [], total: 0 };

  items.forEach(item => {
    if (item.archived) {
      return;
    }

    if (item.subItems) {
      item.subItems.forEach(subItem => {
        if (!subItem.archived) {
          result.items.push(subItem);
          result.total += 1;
        }
      });
    } else {
      result.items.push(item);
      result.total += 1;
    }
  });

  return result;
}

// Grok
export function transformItems(items) {
  const transformedItems = [];
  let total = 0;

  for (const item of items) {
    if (item.archived) {
      continue;
    }

    if (item.subItems) {
      const subItems = item.subItems.filter(subItem => !subItem.archived);
      transformedItems.push(...subItems);
      total += subItems.length;
    } else {
      transformedItems.push(item);
      total++;
    }
  }

  return { items: transformedItems, total };
}

// Replit Advanced
export function transformItems(originalItems) {
  let itemCount = 0;
  const transformedItems = originalItems.reduce((filteredItems, item) => {
    if (item.archived) return filteredItems;

    if (item.subItems && item.subItems.length > 0) {
      const activeSubItems = item.subItems.filter(subItem => !subItem.archived);
      itemCount += activeSubItems.length;
      return filteredItems.concat(activeSubItems);
    }

    itemCount++;
    filteredItems.push(item);
    return filteredItems;
  }, []);

  return {
    items: transformedItems,
    total: itemCount,
  };
}

Tests from the above chatbots were similar. They passed the input data I provided to the function and checked that the result was the same as the expected result, so I included only one example here.

describe('transformItems', () => {
  it('transforms items according to the rules specified', () => {
    const items = [
    // input data from the task I've provided 
  ];

    const expectedResult = {
      items: [
        {
          id: 1,
          title: "Item 1",
        },
        {
          id: 3.1,
          title: "SubItem 3.1",
        },
        {
          id: 3.2,
          title: "SubItem 3.2",
        },
      ],
      total: 3,
    };

    expect(transformItems(items)).toEqual(expectedResult);
  });
});

Second place, 7 points: Replit Basic. The solution also worked after the first prompt. The only reason it is worse than first place is that it handles an extra level of nesting for `subItems`, which I didn’t ask for. So this solution would leave extra dead code that would never be used. The test was very similar to the first-place ones, so I didn't include it here.

export function transformItems(items) {
  let transformedItems = [];
  let totalCount = 0;

  function transform(item) {
    if (!item.archived) {
      totalCount++;
      // NOTE: tries to find subItems of subItems,
      // which is not possible per the task description
      if (item.subItems) {
        item.subItems.forEach((subItem) => {
          if (!subItem.archived) {
            transformedItems.push(subItem);
            totalCount++;
          }
        });
      }
    }
  }

  items.forEach((item) => {
    if (!item.archived && !item.subItems) {
      transformedItems.push(item);
      totalCount++;
    }
    if (item.subItems) {
      transform(item);
    }
  });

  return {
    items: transformedItems,
    total: totalCount,
  };
}

Third place, 6 points: ChatGPT 4. The function worked fine from the first prompt, and the solution was simple. However, the test’s expected result was a little bit wrong (one item in the expected result was missing), so I needed one more prompt to make the test work. I am not including the test here, as it is similar to others.

export function transformItems(items) {
  let transformedItems = [];

  items.forEach(item => {
    if (item.archived) {
      // Skip archived items and their subItems
      return;
    }

    if (item.subItems && item.subItems.length > 0) {
      // Skip the item but add non-archived subItems
      item.subItems.forEach(subItem => {
        if (!subItem.archived) {
          transformedItems.push(subItem);
        }
      });
    } else {
      // Add the item if it's not archived and has no subItems
      transformedItems.push(item);
    }
  });

  return {
    items: transformedItems,
    total: transformedItems.length
  };
}

4th place, 5 points: Gemini. The function also returned the expected result after the first prompt, though its code looks more complex compared to other results:

export function transformItems(items) {
  const transformedItems = [];
  let totalCount = 0;

  items.forEach((item) => {
    if (!item.archived) {
      if (item.subItems) {
        totalCount += transformSubItems(item.subItems, transformedItems);
      } else {
        transformedItems.push(item);
        totalCount++;
      }
    }
  });

  return { items: transformedItems, total: totalCount };
}

function transformSubItems(subItems, transformedItems) {
  let count = 0;
  subItems.forEach((subItem) => {
    if (!subItem.archived) {
      transformedItems.push(subItem);
      count++;
    }
  });
  return count;
}

The test from the first response did not work when copy-pasted, because it looked like this:

test("transforms items correctly", () => {
  const items = [
    // ... your items array here
  ];

This is one reason I do not like Gemini very much: it often does not send you the full answer even if you ask for it. Instead, it sends a part and says something like 'copy this, paste it there, or replace something with it', which is often inconvenient. So, because the function code was more complex and the test required a second prompt, Gemini takes 4th place.

5th place, 4 points: Gemini Advanced. This is another case where the advanced model works worse than the basic one. The function returned the expected result after the first prompt and looked good:

export function transformItems(items) {
  const transformedItems = [];

  // Helper function to recursively process items and subItems
  function processItems(items) {
    items.forEach((item) => {
      if (item.archived) return; // Rule 1

      if (item.subItems) {
        processItems(item.subItems); // Rule 2
      } else {
        transformedItems.push(item);
      }
    });
  }

  processItems(items);

  return {
    items: transformedItems,
    total: transformedItems.length, // Rule 3
  };
}

But the test did not work: the expected result was wrong, and it contained syntax errors, e.g. `id: 1.2.1` (`1.2.1` is not a valid number, and such an id shouldn’t even be there). After I asked it to fix the syntax errors, it replied with `To fix the SyntaxErrors, you'll need to know the specific errors being reported. Here's a breakdown of common syntax errors that might occur in Jest tests and how to fix them:` and started writing theory about possible errors, which is funny, because it should know the code it had sent me. After saying `can you find these errors by reading the test file code you've provided?`, it was able to send the valid test below. But, as you can see, it didn’t cover the archived items in the test, so I would need more prompts to get a test with full coverage.

const { transformItems } = require('./your-file-name'); 

describe('transformItems', () => {
  // ... (Your existing tests: basic handling, archived items)

  it('should handle items with nested subItems', () => {
    const input = [
      {
        id: 1,
        title: 'Item 1',
        subItems: [
          { id: 1.1, title: 'SubItem 1.1' },
          {
            id: 1.2,
            title: 'SubItem 1.2',
            subItems: [{ id: '1.2.1', title: 'SubItem 1.2.1' }], // Valid ID format
          },
        ],
      },
    ];
    const expected = {
      items: [
        { id: 1.1, title: 'SubItem 1.1' },
        { id: '1.2.1', title: 'SubItem 1.2.1' }, // Valid ID format
      ],
      total: 2,
    };

    const result = transformItems(input);
    expect(result).toEqual(expected);
  });
});

6th place, 3 points: ChatGPT 3.5. The worst result in this task. In the first answer, both the function and the test were wrong. The function returned an extra item in the result, `{ id: 3, title: 'Item 3' }`, which should have been skipped because it has subItems, and the function code was complex. The test also contained an extra item. I made 5 more prompts asking to get the function working, but it always had some issue, so I gave up after 6 prompts because I didn’t see any progress.

4. Implement a TypeScript interface for the JavaScript object provided

I use AI assistants quite often for such tasks, for example, when I need to implement a TypeScript interface on the front end while integrating a backend API. It is convenient to copy the object received from the backend and ask a chatbot to implement an interface based on it. I do not always receive the final result, but I at least have a good starting point that requires only slight modification.

// Create me TS interface for the `absence` below

export const absence = {
  id: 123312,
  type: "Vacation",
  requireApproval: true,
  user: {
    id: 123,
    name: "John Doe",
    email: "johndoe@gmail.com"
  },
  schedules: [
    {
      id: 123,
      date: "2024-02-01",
      start: "09:00",
      end: "17:00"
    },
    {
      id: 123,
      date: "2024-02-02",
      start: "09:00",
      end: "18:00"
    }
  ],
  assignee: {
    id: 123,
    name: "James Smith",
    email: "jamessmith@gmail.com",
  }
};

Solutions by places:

First place, 8 points: Gemini. The expected result was received after the first prompt. The interface was immediately split into separate interfaces, and there were suggestions about using stricter Date and Time types. Here is the result:

interface Absence {
  id: number;
  type: string;
  requireApproval: boolean;
  user: User;
  schedules: Schedule[];
  assignee: User;
}

interface User {
  id: number;
  name: string;
  email: string;
}

interface Schedule {
  id: number;
  // Consider using Date type for stricter validation
  date: string;  
  // Consider using Time type or a custom time format for stricter validation 
  start: string; 
  // Consider using Time type or a custom time format for stricter validation
  end: string; 
}

Second place, 7 points: Copilot Chat, Grok, ChatGPT 4, and Replit Advanced. These four chatbots produced almost the same result. The only difference was the order of the interfaces in the response, which doesn’t matter. These responses are good, but worse than first place because there are no suggestions about Date and Time types, which I might not need but would at least be good to know about.

interface User {
  id: number;
  name: string;
  email: string;
}

interface Schedule {
  id: number;
  date: string;
  start: string;
  end: string;
}

interface Absence {
  id: number;
  type: string;
  requireApproval: boolean;
  user: User;
  schedules: Schedule[];
  assignee: User;
}

Third place, 6 points: ChatGPT 3.5. After the first prompt, the result was a single interface with the user and schedule types nested inline rather than extracted into reusable `User` and `Schedule` interfaces; after the second prompt, the result was the same as the others above.
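
For illustration, the single-interface variant looked roughly like the sketch below (my reconstruction, not ChatGPT 3.5's exact output), with the nested objects typed inline instead of extracted:

interface Absence {
  id: number;
  type: string;
  requireApproval: boolean;
  user: {
    id: number;
    name: string;
    email: string;
  };
  schedules: {
    id: number;
    date: string;
    start: string;
    end: string;
  }[];
  assignee: {
    id: number;
    name: string;
    email: string;
  };
}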

4th place, 5 points: Replit Basic. After the first prompt, the result was the same as ChatGPT 3.5’s: the interface was not broken down into pieces. However, I needed two more prompts to separate them, because after the first of those it sent me only part of the final result rather than the whole thing.

5th place, 4 points: Gemini Advanced. After the first prompt, the interface was split into pieces, but it was incorrect because the `assignee` field was made optional (`assignee?`), as can be seen below, even though it was clear from the task that it should be required. A second prompt was needed to make it required.

...
interface Absence {
  id: number;
  type: string; // Consider an enum for absence types for better type safety
  requireApproval: boolean;
  user: User;
  schedules: Schedule[];
  assignee?: User;  // Make assignee optional with '?'
}

5. Perform code refactoring

When I need to refactor some code and am lazy that day, I can ask an AI chatbot to help me. Sometimes it does well and sends me code refactored exactly as I need. In this case, I created a function that loops over the array of items three times, which is not optimal, and asked the chatbots to refactor it so it does a single loop.

Task:

// The function below does 3 loops over the data array to transform it. 
// Implement me refactored function that will do the same in a single loop.
// No other modifications are needed.

```
const transformData = (data) => {
  const withoutArchived = data.filter((item) => item.status !== "ARCHIVED");
  const withoutCancelled = withoutArchived.filter(
    (item) => item.status !== "CANCELLED",
  );
  const transformed = withoutCancelled.map((item) => {
    return {
      ...item,
      hostId: item.host.id,
      userFullName: `${item.host.firstName} ${item.host.lastName}`,
    };
  });

  return transformed;
};
```

Solutions by places:

This seemed an easy task for the chatbots because all except one did it well after the first prompt, so we have only two places here.

First place, 8 points: Copilot Chat, ChatGPT 3.5, ChatGPT 4, Gemini Advanced, Grok, Replit Basic, Replit Advanced. As can be seen below, most of the chatbots implemented this task with the `Array.reduce` method and one with a `for...of` loop. The results had slight variations, but all of them did the task well.

// Copilot Chat, Gemini Advanced, Grok, Replit Basic 
// (some solutions had slight variations that didn't matter)

const transformData = (data) => {
  return data.reduce((acc, item) => {
    if (item.status !== "ARCHIVED" && item.status !== "CANCELLED") {
      acc.push({
        ...item,
        hostId: item.host.id,
        userFullName: `${item.host.firstName} ${item.host.lastName}`,
      });
    }
    return acc;
  }, []);
};

// Replit Advanced
const transformData = (data) => {
  return data.reduce((acc, item) => {
    // Skip items with a status of "ARCHIVED" or "CANCELLED"
    if (item.status === "ARCHIVED" || item.status === "CANCELLED") {
      return acc;
    }

    // Transform and add the item to the accumulator
    acc.push({
      ...item,
      hostId: item.host.id,
      userFullName: `${item.host.firstName} ${item.host.lastName}`,
    });

    return acc;
  }, []);
};

// Chat GPT 4
const transformDataRefactored = (data) => {
  return data.reduce((accumulator, item) => {
    // Skip items with status ARCHIVED or CANCELLED
    if (item.status === "ARCHIVED" || item.status === "CANCELLED") {
      return accumulator;
    }

    // Transform the item
    const transformedItem = {
      ...item,
      hostId: item.host.id,
      userFullName: `${item.host.firstName} ${item.host.lastName}`,
    };

    // Add the transformed item to the accumulator
    accumulator.push(transformedItem);
    return accumulator;
  }, []); // Initialize the accumulator as an empty array
};

// Chat GPT 3.5
const transformData = (data) => {
  const transformed = [];

  for (const item of data) {
    if (item.status !== "ARCHIVED" && item.status !== "CANCELLED") {
      transformed.push({
        ...item,
        hostId: item.host.id,
        userFullName: `${item.host.firstName} ${item.host.lastName}`,
      });
    }
  }

  return transformed;
};

Second place, 7 points: Gemini. After the first prompt, Gemini returned a result with two loops. I had to ask again to get the desired single-loop result.

// 1 After first prompt:
const transformData = (data) => {
  return data.filter(
    (item) => item.status !== "ARCHIVED" && item.status !== "CANCELLED"
  ).map((item) => {
    return {
      ...item,
      hostId: item.host.id,
      userFullName: `${item.host.firstName} ${item.host.lastName}`,
    };
  });
};

// 2
// Returned result with Array.reduce similar to others in the first place.

6. Implement a React UI component with styles and animation

Now it is time for the task where you will see not only the code but also the final result. When I need to implement a component with some logic, like animation or scrolling behavior, I can ask an AI assistant for help. I agree that in many cases an AI tool won’t be able to create a production-ready component, because you may have your own design system with a color palette, sizes, etc., but at least for me it often provides good ideas.

Task:

Implement a React component for credit card using javascript:
- the card should have a width of 300px and height of 150px and some 
border radius like in real card
- background should be black, text color - white
- on the front side display the card number, card holder name, expiration date.
- on the back side, display the CVV and magnetic stripe 
- make the styles so that the card looks similar to the real card. 
- on hovering the card implement the flip animation, so that the back side 
will be visible. Make the animation smooth and realistic
- For styles use css modules with sass
- do not use any external libraries except react, node-sass that are 
already installed in the project

Solutions by places:

First place, 8 points: Replit Advanced

[Image: the Replit Advanced credit card component]

Replit Advanced took first place because the solution met all the task's requirements. For the results below, I will point out the things that were worse than in this example. Here is the code for the solution:

import React from 'react';
import styles from './CreditCard.module.scss';

const CreditCard = ({ cardNumber, cardHolder, expirationDate, cvv }) => {
  return (
    <div className={styles.cardContainer}>
      <div className={styles.cardFront}>
        <div className={styles.cardNumber}>{cardNumber}</div>
        <div className={styles.cardHolder}>{cardHolder}</div>
        <div className={styles.expirationDate}>{expirationDate}</div>
      </div>
      <div className={styles.cardBack}>
        <div className={styles.cvv}>{cvv}</div>
        <div className={styles.magneticStripe}></div>
      </div>
    </div>
  );
};

export default CreditCard;

// Styles:
.cardContainer {
  width: 300px;
  height: 150px;
  border-radius: 10px;
  perspective: 1000px;
  &:hover .cardFront {
    transform: rotateY(180deg);
  }
  &:hover .cardBack {
    transform: rotateY(0deg);
  }
}
.cardFront, .cardBack {
  width: 100%;
  height: 100%;
  background-color: black;
  color: white;
  position: absolute;
  backface-visibility: hidden;
  border-radius: 10px;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
  transition: transform 0.6s linear;
}
.cardBack {
  transform: rotateY(-180deg);
}
.magneticStripe {
  width: 90%;
  height: 20px;
  background-color: grey;
  margin-top: 5px;
}

You can also learn something new from AI assistants, as I did in this case. Since I don’t work much with CSS animations, I had never heard of the CSS `perspective` property used here, thanks to which this solution looks the most realistic and three-dimensional.
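
In short, `perspective` is set on the static parent and defines how far the viewer is from the element, which is what gives the rotation its depth. A minimal illustration (my own sketch, not from any chatbot's answer):

/* Illustration only: the parent gets the perspective, the child gets the 3D transform.
   Remove the perspective line and the same rotation looks flat; smaller values
   (e.g. 400px) make the effect more dramatic. */
.cardContainer {
  perspective: 1000px;
}
.cardContainer:hover .cardFront {
  transform: rotateY(180deg);
}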

Second place, 7 points: ChatGPT 3.5.

[Image: the ChatGPT 3.5 credit card component]

The solution looks similar to the first-place one, and all the requirements are implemented as well, but two minor things put it in second place:

  • the content jumps a little bit when returning to the front side;
  • the magnetic stripe looks more like a barcode.

This is one more example of the basic model giving a better response than the advanced one, both here and with Gemini, as you will see below. Here is the solution code:

import React from 'react';
import styles from './CreditCard.module.scss';

const CreditCard = ({ cardNumber, cardHolder, expirationDate, cvv }) => {
  return (
    <div className={styles.creditCard}>
      <div className={`${styles.card} ${styles.front}`}>
        <div className={styles.cardNumber}>{cardNumber}</div>
        <div className={styles.cardHolder}>{cardHolder}</div>
        <div className={styles.expirationDate}>{expirationDate}</div>
      </div>
      <div className={`${styles.card} ${styles.back}`}>
        <div className={styles.cvv}>{cvv}</div>
        <div className={styles.magneticStripe}></div>
      </div>
    </div>
  );
};

export default CreditCard;

// Styles
.creditCard {
  width: 300px;
  height: 150px;
  border-radius: 15px;
  perspective: 1000px;
}

.card {
  position: absolute;
  width: 100%;
  height: 100%;
  border-radius: 15px;
  background: black;
  color: white;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
  backface-visibility: hidden;
  transition: transform 0.5s;
}

.front {
  z-index: 2;
}

.back {
  transform: rotateY(180deg);
}

.creditCard:hover .front {
  transform: rotateY(-180deg);
}

.creditCard:hover .back {
  transform: rotateY(0deg);
}

.cardNumber {
  font-size: 1.2rem;
  margin-bottom: 10px;
}

.cardHolder {
  font-size: 1rem;
  margin-bottom: 10px;
}

.expirationDate {
  font-size: 1rem;
}

.cvv {
  font-size: 1.2rem;
}

.magneticStripe {
  width: 80%;
  height: 20px;
  background: repeating-linear-gradient(90deg, white, white 5px, black 5px, black 10px);
}

Third place, 6 points: Copilot Chat

[Image: the Copilot Chat credit card component]

Copilot Chat implemented almost all the requirements except smooth animation, as can be seen in the image. When moving the cursor over the card, the animation becomes intermittent. This happens because the hover effect is applied not to the card wrapper, which is immovable and always has the same dimensions (as in the examples above), but to the card body itself, which is being transformed. The same issue appears in a few examples below.
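
To make the difference concrete, here is a sketch of the two patterns (my own illustration, not Copilot Chat's actual code). When the `:hover` target itself rotates, the cursor can end up off the element mid-animation, the hover state drops, and the card snaps back; hovering a static wrapper avoids that.

// Problematic pattern: hover is attached to the element that rotates,
// so the hover state flickers while the card turns away from the cursor.
.card:hover {
  transform: rotateY(180deg);
}

// Stable pattern: hover is attached to an immovable wrapper with fixed
// dimensions, and only the inner faces rotate (as in the solutions above).
.cardContainer:hover .cardFront {
  transform: rotateY(180deg);
}
.cardContainer:hover .cardBack {
  transform: rotateY(0deg);
}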

4th place, 5 points: ChatGPT 4

[Image: the ChatGPT 4 credit card component]

This solution is similar to Copilot Chat's and has the same intermittent animation issue for the same reason. Another requirement not implemented here was the size: ChatGPT 4 forgot that padding can increase the block size.
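
Assuming the padding is what pushed the card past 300x150 (ChatGPT 4's code isn't shown here), the usual one-line fix is to make padding count toward the declared dimensions:

.card {
  box-sizing: border-box; /* keep the card at 300x150 even with inner padding */
  width: 300px;
  height: 150px;
  padding: 20px; /* hypothetical padding value */
}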

5th place, 4 points: Gemini

[Image: the Gemini credit card component]

Here are the issues that make it 5th place:

  • intermittent animation;
  • no side padding;
  • when the card is flipped, you see a mirrored front side instead of the back side. The back side is actually implemented in the JSX code, but Gemini failed to apply the styles needed to make it visible.

6th place, 3 points: Replit Basic

[Image: the Replit Basic credit card component]

Replit Basic took 6th place because the card had no padding at all. The animation was implemented but did not work because the wrong CSS selector was used for the hover (a working selector is sketched after the snippet). And one more important thing about the code: there were no props, and all the dynamic content was hardcoded in the component:

const CreditCard = () => {
  return (
    <div className={`${styles.creditCard} ${styles.flipCard}`}>
      <div className={`${styles.flipCardInner}`}>
        <div className={`${styles.front}`}>
          <div className={styles.cardNumber}>1234 5678 9012 3456</div>
          <div className={styles.cardHolder}>John Doe</div>
          <div className={styles.expiry}>10/23</div>
        </div>
        <div className={`${styles.back}`}>
          <div className={styles.cvv}>456</div>
          <div className={styles.magneticStripe}></div>
        </div>
      </div>
    </div>
  );
};
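
For reference, given the class names in this JSX, a hover rule along these lines (my sketch, not Replit Basic's output) would have triggered the flip:

// Hypothetical fix: rotate the inner wrapper when the outer, static card is hovered.
.flipCard {
  perspective: 1000px;
}
.flipCardInner {
  transition: transform 0.6s;
  transform-style: preserve-3d;
}
.flipCard:hover .flipCardInner {
  transform: rotateY(180deg);
}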

I didn't ask for props to be implemented, but they should know to do that a priori if we call them artificial intelligence. And yes, all the chatbots ranked above implemented the component with props.

7th place, 2 points: Gemini Advanced

[Image: the Gemini Advanced credit card component]

That’s a pretty good solution, right? It took me two prompts to get it because, after the first prompt, I received this response: “I'm a language model and don't have the capacity to help with that.” The basic Gemini gave a similar answer in one of the tasks above.

By the way, all the solutions above were received after the first prompt. More prompts were unnecessary because the first one was enough to put them in their places.

You could ask why the card is empty. Here is what Gemini Advanced answered when I made the second prompt and asked it to do the task:

...
<div className="card-front">
  {/* Front content - Card Number, Name, Expiry */}
</div>
<div className="card-back">
  {/* Back content - CVV, Magnetic Stripe */}
</div>
...

Some would say: "Why should you pay for Gemini Advanced if the basic model did this task better?" I don't have an answer.

8th place, 1 point: Grok. This is the only example in this task with nothing to show. In the first response, the styles were in a `module.scss` file, but the JSX used regular CSS class names:

...
<div className={`credit-card ${isFlipped ? 'flipped' : ''}`}>
...

After I asked it to fix that, Grok broke down and started answering with broken syntax. Note the many `&lt;` entities instead of `<`:

return (
    &lt;div
  className={styles.creditCard}
  onMouseEnter={handleFlip}
  onMouseLeave={handleFlip}
    >
    &lt;div className={styles.frontSide}>
    &lt;div className={styles.cardNumber}>
    {/* Display card number */}
    &lt;/div>
    &lt;div className={styles.cardHolderName}>

Total points and results:

  • Copilot Chat: 7 + 6 + 8 + 7 + 8 + 6 = 42
  • Replit Advanced: 5 + 6 + 8 + 7 + 8 + 8 = 42
  • ChatGPT 4: 3 + 8 + 6 + 7 + 8 + 5 = 37
  • ChatGPT 3.5: 6 + 7 + 3 + 6 + 8 + 7 = 37
  • Gemini: 4 + 6 + 5 + 8 + 7 + 4 = 34
  • Gemini Advanced: 8 + 6 + 4 + 4 + 8 + 2 = 32
  • Replit Basic: 2 + 6 + 7 + 5 + 8 + 3 = 31
  • Grok: 1 + 5 + 8 + 7 + 8 + 1 = 30

Insights from the comparative analysis of AI assistants

The results are pretty interesting. We do not have a clear winner here, as Copilot Chat and Replit Advanced have the same points and share first place. It is also interesting that both GPT 3.5 and GPT 4 models have the same number of points. Here are my thoughts after doing this research:

Copilot Chat (1st place)

Copilot Chat shares first place with Replit Advanced, but it has some advantages when it comes to development:

  • Its subscription price is the lowest (10 USD per month as of now)
  • It is integrated into your IDE and also performs code completions. I use Copilot Chat during my day-to-day work, and its completions save me some time every day.

It also has some disadvantages compared to others:

  • As it is integrated into the IDE, it can only be used there, unlike the others, which can be used via a web interface or in a mobile app.

By the way, some people say on the internet that Copilot Chat is using some version of GPT-4, but I could not find confirmation of this on their official website.

Replit Advanced (1st place)

When I first learned about Replit and tried their basic model, it was much worse than GPT 3.5, which I used often. So it was a surprise that their advanced model often answers better than GPT 4, but that is what the research above shows.

One advantage of Replit is that it allows the creation of online projects called Repls, where you can use the AI assistant. It is very useful when there is a need to prototype something and maybe share it with others.

ChatGPT 4 (2nd place)

I cannot say that GPT 4 is far worse than the two above, and it definitely has its advantages, but at least for some tasks, it answered worse. Some essential advantages are:

  • The possibility to create or use custom GPTs, “custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills”.
  • The possibility to create images with DALL·E. Also, hopefully, Sora (a model that creates videos from text instructions) will soon be available to all Plus users. These things are interesting, but I am not sure they will help much during the front-end development process.

ChatGPT 3.5 (2nd place, 1st place among free LLMs)

ChatGPT 3.5 also takes second place with the same points as GPT 4, and it is first among the free LLMs. This makes me want to thank OpenAI for making such a powerful model publicly available. Since GPT 3.5 and GPT 4 got the same points, one may ask: why pay for GPT 4? The question is fair, because GPT 3.5 performs well. However, GPT 4’s answers are still better for some tasks, and its advantages are stated above.

Gemini (3rd place, 2nd place among free LLMs)

It is really funny that the basic model performed better than the advanced one, maybe because the advanced model was released recently and still needs polishing. If I didn’t want to pay for AI assistants, I would start with ChatGPT 3.5 and then go to Gemini if I were unsatisfied with an answer.

Gemini Advanced (4th place)

Though Gemini Advanced showed the best result in some tasks, I personally still wouldn’t use it yet because, in my opinion, it needs updates:

  • It was strange to see such answers: “I'm a language model and don't have the capacity to help with that.” (A similar answer was from Gemini)
  • I didn’t like that, for straightforward prompts (like in task 6), instead of doing the task entirely, the model answered with code comments saying that something should be done there.

Replit Basic (5th place, 3rd place among free LLMs)

The difference between the Replit Basic and Advanced models is much more significant than between the ChatGPT or Gemini tiers. I do not consider the Replit Basic model to be of much help during front-end development, though it was able to solve some easy tasks.

Grok (6th place)

As Grok is a paid chatbot with the worst results, I would not consider it as an AI assistant for a front-end developer either.

And for those who have read this far, I have a free tip: alternatives to the AI chatbots' original frontends.

If paid AI tools are helpful for you but you think they are too expensive, there is good news: you can use alternative frontends. Better ChatGPT is one example that works with OpenAI LLMs. For example, if you want to use GPT-4 without the $20/month subscription, the instructions are very simple:

Sign in to your OpenAI platform account, top up your balance with at least $5, create an API key, paste the key into the Better ChatGPT API settings, and voila: you can use GPT-4 without a subscription, paying only for what you use. With my usage, it now costs me about $2/month instead of $20.
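
If you want to skip a third-party frontend entirely, the same pay-as-you-go idea works by calling the OpenAI API directly with that key. Here is a minimal sketch; the model name and prompt are placeholders, and it assumes Node 18+ with an `OPENAI_API_KEY` environment variable, so check the current API docs and pricing before relying on it.

// Minimal pay-per-use request to the OpenAI Chat Completions API (sketch only).
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4', // placeholder: use whichever model your account has access to
    messages: [{ role: 'user', content: 'Write a JSDoc comment for a useOutsideClick React hook' }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);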

If you have additional tasks that AI chatbots assist you with during your day-to-day work, I'd love to hear about them! Share your experiences with us by filling out the form below.

Igor Buts
Senior Front-End Engineer at Star

Igor is a seasoned Senior Front-End Engineer with over 8 years of experience in web development. With a strong background in JavaScript development, Igor has contributed to various projects, demonstrating his expertise and dedication in the field.
