11+ Pyramid Jobs in Hyderabad
Apply to 11+ Pyramid Jobs in Hyderabad on CutShort.io. Explore the latest Pyramid Job opportunities across top companies like Google, Amazon & Adobe.
AI Research Intern at Copilot GTM
Role Overview:
The AI Research Intern will focus on natural language processing (NLP) and large language models (LLMs), assisting in refining and testing the retrieval-augmented generation (RAG) system for CopilotGTM.
Key Responsibilities:
- Assist in developing and refining NLP models to answer customer queries.
- Research and implement improvements to minimize hallucinations in the LLM.
- Test RAG model configurations and provide feedback to improve accuracy.
- Work with the engineering team to improve real-time product recommendations and responses.
- Analyze datasets and fine-tune models for specific use cases (e.g., sales, compliance).
Skills Required:
- Strong understanding of NLP and familiarity with LLMs (GPT, BERT, etc.).
- Basic coding experience in Python.
- Knowledge of data handling, data processing, and model training.
- Problem-solving ability and eagerness to experiment with new techniques.
Preferred:
- Experience with libraries like Hugging Face, PyTorch, or TensorFlow.
- Familiarity with retrieval-augmented generation (RAG) systems.
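For context, here is a minimal sketch of the retrieval step in a RAG system like the one described above, assuming the sentence-transformers library and an in-memory document store; the model name, documents, and query are illustrative, not CopilotGTM's actual stack:

```python
# Minimal RAG retrieval sketch: embed documents, retrieve the closest match
# for a query, and assemble a grounded prompt for an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

documents = [
    "Our enterprise plan includes SSO and audit logs.",
    "The starter plan supports up to five seats.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # vectors are normalized, so dot product = cosine
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Does the product support single sign-on?"
context = "\n".join(retrieve(query))
# Constraining the answer to retrieved text is the basic lever for the
# hallucination-minimization work mentioned in the responsibilities.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```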
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and source-to-target transformation mapping, along with automation and testing strategies, translating business needs into technical solutions that adhere to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities:
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements:
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (see the sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or analytical databases (e.g., Cassandra, Redshift, BigQuery).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- Deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
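As a rough illustration of the Spark tuning and resource-allocation experience asked for above, here is a minimal PySpark sketch; the configuration values, paths, and column names are illustrative assumptions, not a recommended recipe:

```python
# Illustrative PySpark job showing common resource and shuffle settings,
# plus a broadcast join to avoid shuffling a large table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("etl-example")
    .config("spark.executor.memory", "4g")          # per-executor heap
    .config("spark.executor.cores", "4")            # concurrent tasks per executor
    .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism
    .getOrCreate()
)

events = spark.read.parquet("s3a://bucket/events/")    # hypothetical input
users = spark.read.parquet("s3a://bucket/dim_users/")  # small dimension table

# Broadcasting the small side keeps the join map-side and avoids a full shuffle.
joined = events.join(broadcast(users), "user_id")
joined.write.mode("overwrite").parquet("s3a://bucket/output/")
```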
Job Description
- 5-8 years of IT industry experience, preferably in the banking domain
- Strong Python skills and a good understanding of Java and microservices
- Should have handled banking customers and have exposure to production support processes
- Good database and PL/SQL skills; able to write SQL as and when needed
- Good attitude and communication
- WFH is not allowed; work is from bank premises (Saifabad or Hitech City) as per the bank calendar, and flexibility to work from Saifabad is a must
What are the Key Responsibilities:
- Responsibilities include writing and testing code, debugging programs, and integrating applications with third-party web services.
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Improve functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Create customized applications for smaller tasks to enhance website capability based on business needs
- Ensure web pages are functional across different browser types; conduct tests to verify user functionality
- Verify compliance with accessibility standards
- Assist in resolving moderately complex production support problems
What are we looking for:
- 3+ Years of work experience as a Python Developer
- Expertise in at least one popular Python framework, such as Django
- Knowledge of NoSQL databases such as Elasticsearch and MongoDB (see the sketch after this list)
- Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
- Familiarity with Apache Kafka will give you an edge over others
- Good understanding of the operating system and networking concepts
- Good analytical and troubleshooting skills
- Graduation/Post Graduation in Computer Science / IT / Software Engineering
- Decent verbal and written communication skills to communicate with customers, support personnel, and management
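As a small illustration of the MongoDB familiarity listed above, here is a minimal PyMongo sketch, assuming a local MongoDB instance; the database, collection, and field names are made up:

```python
# Minimal PyMongo sketch: insert a document and query it back.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["appdb"]

db.users.insert_one({"name": "asha", "plan": "starter"})
for user in db.users.find({"plan": "starter"}):
    print(user["name"])
```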
Looking for a Python lead/architect.
Must Have:
- Able to architect an application from scratch
- Able to refactor code
- Knowledge of Flask and Django
- Team player
- Able to lead and guide the team
- Deployment of code on the Azure platform
Good to have:
- Knowledge of SQLAlchemy
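For the SQLAlchemy item above, here is a minimal ORM sketch in the SQLAlchemy 2.x declarative style, using an in-memory SQLite database so it runs self-contained; the model is illustrative:

```python
# Minimal SQLAlchemy 2.x ORM sketch: define a model, create the schema,
# insert a row, and query it back.
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite:///:memory:")  # in-memory DB for portability
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="asha"))
    session.commit()
    for user in session.scalars(select(User)):
        print(user.id, user.name)
```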
DATAKYND seeks a Python Developer who can collaborate with our front-end developers and designers to meet multiple project needs for our clients. We seek a team player who can think beyond the code to provide recommendations and solutions focused on meeting the client's needs.
Responsibilities:
- Full-stack development
- Develop data migration, conversion, cleansing, and retrieval tools and processes (ETL) using pandas (see the sketch after this list)
- Web automation, crawlers, and scrapers
- Data import/export formats for third-party applications (JSON/CSV)
- Integrations with third-party applications (REST API)
- Requirements analysis and providing solutions using Python and related tools.
- Supporting and maintaining existing Python scripts, applications and interfaces.
- Evaluating emerging open-source libraries and providing recommendations.
- Strong analytical and problem-solving skills are necessary
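As referenced in the ETL item above, here is a minimal pandas sketch of the extract-transform-load pattern with CSV in and JSON out; the file and column names are illustrative:

```python
# Minimal pandas ETL sketch: read a CSV, clean it, and export JSON
# records for a JSON-based third-party integration.
import pandas as pd

# Extract: hypothetical input file.
df = pd.read_csv("orders.csv")

# Transform: normalize column names, drop rows missing an order id,
# and parse the order date.
df.columns = [c.strip().lower() for c in df.columns]
df = df.dropna(subset=["order_id"])
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Load: write records for the downstream API.
df.to_json("orders_clean.json", orient="records", date_format="iso")
```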
Primary Technical and Functional Skills:
- Python 3.x, web frameworks (Django, Flask)
- HTML, JavaScript, CSS, Bootstrap
- PostgreSQL, MongoDB, SQL
- Multithreading, logging, email, schedulers
- Third-party integrations, REST APIs, and microservices
- JSON/CSV
Secondary Technical and Functional Skills:
- Basics of Linux, Nginx, Gunicorn, Apache, WSGI
- GitHub, Docker
- Cloud platforms such as GCP, Azure or AWS
- Design and workflow document preparation
Desired profile:
- Should have excellent verbal and written communication skills
Job Description:
- Have intermediate/advanced knowledge of Python.
- Hands-on experience with OOP in Python; experience with the Flask/Django frameworks and ORMs with MySQL or MongoDB is a plus.
- Must have experience writing shell scripts and configuration files; should be proficient in bash.
- Should have excellent Linux administration capabilities.
- Working experience with SCM; Git is preferred.
- Should have knowledge of the basics of networking in Linux, and computer networks in general.
- Experience with engineering practices such as code refactoring, design patterns, design-driven development, and continuous integration.
- Understanding of the architecture of OpenStack/Kubernetes and good knowledge of their standard client interfaces is a plus.
- Code contributions to the OpenStack/Kubernetes community are a plus.
- Understanding of the NFV and SDN domains is a plus.
Software Development Engineer – SDE 2.
As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.
Top Skills
You write high-quality, maintainable, and robust code, often in Java, C++, or C#.
You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
You have experience building scalable software systems that are high-performance, highly available, highly transactional, low-latency, and massively distributed.
Roles & Responsibilities
You solve problems at their root, stepping back to understand the broader context.
You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
You recognize and use design patterns to solve business problems.
You understand how operating systems work, perform and scale.
You continually align your work with Amazon’s business objectives and seek to deliver business value.
You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
You communicate clearly with your team and with other groups and listen effectively.
Skills & Experience
Bachelor's or Master's in Computer Science or a relevant technical field.
Experience in software development and the full product life cycle.
Excellent programming skills in an object-oriented programming language, preferably Java, C/C++/C#, Perl, Python, or Ruby.
Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
Proficiency in SQL and data modeling.
As a Software Engineer at Quince, you'll be responsible for designing and building scalable infrastructure and applications to solve some very interesting problems in the logistics and finance tech space.
Responsibilities:
- Design and architect solutions on the cloud for various business problems with workflow efficiency and scale in mind.
- Be at the forefront with the business team to learn, understand, identify, and translate functional requirements into technical opportunities.
- End-to-end ownership, from scoping the requirements to the final delivery of the solution, with a keen eye for detail and quality.
- Build and improve logistics components for this innovative M2C supply-chain model.
- Build and maintain scalable ETL data pipelines.
Requirements:
- Bachelor's/Master's/PhD in Computer Science or a closely related subject.
- 1-5 years of experience in building software solutions.
- Good at data structures and their practical applications.
- Proficiency in Kotlin, Java, Python.
- Experience in deploying and maintaining applications on cloud platforms (e.g., AWS, Google Cloud).
- Proficiency with SQL and databases, whether relational, NoSQL, or analytical warehouses (Snowflake, AWS Redshift, etc.).
- Experience with messaging middleware such as Kafka is good to have.
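For the Kafka item above, here is a minimal consumer sketch using the kafka-python client; the broker address, topic, and event fields are illustrative assumptions:

```python
# Minimal Kafka consumer sketch: read JSON events from a topic.
import json
from kafka import KafkaConsumer  # assumed kafka-python dependency

consumer = KafkaConsumer(
    "shipment-events",                     # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # assumed local broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="logistics-etl",
)

for message in consumer:
    event = message.value
    # Each event could feed an ETL pipeline like those described above.
    print(event.get("order_id"), event.get("status"))
```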
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment (a small illustration follows this list).
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
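As a small, language-agnostic illustration of the concurrency reasoning mentioned in the responsibilities (shown here in Python rather than the Java/C++ used for the product), this sketch parallelizes independent I/O-bound work with a thread pool; the endpoints are illustrative:

```python
# Thread-pool sketch: fan independent I/O-bound tasks out to worker threads.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = [
    "https://example.com",      # illustrative endpoints
    "https://www.example.com",
]

def fetch(url: str) -> int:
    # I/O-bound work releases the GIL while waiting, so threads overlap well.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

with ThreadPoolExecutor(max_workers=4) as pool:
    for url, size in zip(urls, pool.map(fetch, urls)):
        print(url, size)
```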
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team